Test Report: KVM_Linux_crio 19326

35e58bd4f2346c2fce1feaa9162990386c1fdc2b:2024-07-25:35495

Failed tests (30/322)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 152.04
45 TestAddons/parallel/MetricsServer 367.17
54 TestAddons/StoppedEnableDisable 154.44
173 TestMultiControlPlane/serial/StopSecondaryNode 141.76
175 TestMultiControlPlane/serial/RestartSecondaryNode 49.47
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 421.9
180 TestMultiControlPlane/serial/StopCluster 141.7
240 TestMultiNode/serial/RestartKeepsNodes 324.06
242 TestMultiNode/serial/StopMultiNode 141.12
249 TestPreload 181.12
257 TestKubernetesUpgrade 447.14
280 TestPause/serial/SecondStartNoReconfiguration 66.87
294 TestStartStop/group/old-k8s-version/serial/FirstStart 295.04
301 TestStartStop/group/no-preload/serial/Stop 139.22
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.57
312 TestStartStop/group/old-k8s-version/serial/DeployApp 0.52
313 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 98.28
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
326 TestStartStop/group/embed-certs/serial/Stop 139.07
329 TestStartStop/group/old-k8s-version/serial/SecondStart 749.14
330 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.19
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.09
334 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.19
335 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.44
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 466.81
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 542.63
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 310.37
339 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 112.54
TestAddons/parallel/Ingress (152.04s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-377932 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-377932 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-377932 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [749c6e52-1618-4a00-9ab0-3f73733eccb3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [749c6e52-1618-4a00-9ab0-3f73733eccb3] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00410803s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-377932 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.836135841s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-377932 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.150
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-377932 addons disable ingress-dns --alsologtostderr -v=1: (1.717743316s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-377932 addons disable ingress --alsologtostderr -v=1: (7.659680638s)
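
Reproduction note: the probe that failed above boils down to running curl inside the minikube VM over ssh, with the Host header pointing at the Ingress rule and a hard timeout so a hang surfaces as an error rather than blocking the run. curl's exit code 28 conventionally means the operation timed out, which matches the ~2m9s stall and the "Process exited with status 28" stderr. The sketch below is a standalone illustration, not part of the test suite; the binary path and profile name are taken from the log, everything else is assumed.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bound the probe so a non-responsive ingress fails fast instead of hanging.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Same command the test issues: curl the node-local ingress with the expected Host header.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-377932",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("ingress probe failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("ingress responded:\n%s\n", out)
}
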
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-377932 -n addons-377932
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-377932 logs -n 25: (1.147596176s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-170797                                                                     | download-only-170797 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:29 UTC |
	| delete  | -p download-only-108558                                                                     | download-only-108558 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-606783 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC |                     |
	|         | binary-mirror-606783                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44459                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-606783                                                                     | binary-mirror-606783 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:29 UTC |
	| addons  | enable dashboard -p                                                                         | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC |                     |
	|         | addons-377932                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC |                     |
	|         | addons-377932                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-377932 --wait=true                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	|         | addons-377932                                                                               |                      |         |         |                     |                     |
	| ip      | addons-377932 ip                                                                            | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-377932 ssh curl -s                                                                   | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-377932 ssh cat                                                                       | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	|         | /opt/local-path-provisioner/pvc-21933440-c7fa-4b82-89b2-60e7bd69bee6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-377932 addons                                                                        | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-377932 addons                                                                        | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | -p addons-377932                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | -p addons-377932                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | addons-377932                                                                               |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-377932 ip                                                                            | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:35 UTC | 25 Jul 24 17:35 UTC |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:35 UTC | 25 Jul 24 17:35 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:35 UTC | 25 Jul 24 17:35 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 17:29:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:29:35.483663   14037 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:29:35.483933   14037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:29:35.483943   14037 out.go:304] Setting ErrFile to fd 2...
	I0725 17:29:35.483949   14037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:29:35.484123   14037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:29:35.484759   14037 out.go:298] Setting JSON to false
	I0725 17:29:35.485558   14037 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":719,"bootTime":1721927856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 17:29:35.485613   14037 start.go:139] virtualization: kvm guest
	I0725 17:29:35.487628   14037 out.go:177] * [addons-377932] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 17:29:35.489115   14037 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 17:29:35.489126   14037 notify.go:220] Checking for updates...
	I0725 17:29:35.491505   14037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:29:35.492583   14037 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:29:35.493766   14037 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:29:35.495091   14037 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 17:29:35.496263   14037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 17:29:35.497460   14037 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 17:29:35.528353   14037 out.go:177] * Using the kvm2 driver based on user configuration
	I0725 17:29:35.529444   14037 start.go:297] selected driver: kvm2
	I0725 17:29:35.529455   14037 start.go:901] validating driver "kvm2" against <nil>
	I0725 17:29:35.529465   14037 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:29:35.530104   14037 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:29:35.530169   14037 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 17:29:35.544383   14037 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 17:29:35.544429   14037 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 17:29:35.544669   14037 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:29:35.544726   14037 cni.go:84] Creating CNI manager for ""
	I0725 17:29:35.544744   14037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 17:29:35.544760   14037 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 17:29:35.544807   14037 start.go:340] cluster config:
	{Name:addons-377932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:29:35.544914   14037 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:29:35.546565   14037 out.go:177] * Starting "addons-377932" primary control-plane node in "addons-377932" cluster
	I0725 17:29:35.547631   14037 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:29:35.547658   14037 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 17:29:35.547665   14037 cache.go:56] Caching tarball of preloaded images
	I0725 17:29:35.547729   14037 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 17:29:35.547738   14037 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 17:29:35.548013   14037 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/config.json ...
	I0725 17:29:35.548029   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/config.json: {Name:mka8eb86bdc511d9930f24e5d458457e2aefedee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:29:35.548138   14037 start.go:360] acquireMachinesLock for addons-377932: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 17:29:35.548175   14037 start.go:364] duration metric: took 26.578µs to acquireMachinesLock for "addons-377932"
	I0725 17:29:35.548191   14037 start.go:93] Provisioning new machine with config: &{Name:addons-377932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:29:35.548239   14037 start.go:125] createHost starting for "" (driver="kvm2")
	I0725 17:29:35.549654   14037 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0725 17:29:35.549762   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:29:35.549795   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:29:35.563619   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38025
	I0725 17:29:35.564008   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:29:35.564513   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:29:35.564532   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:29:35.564939   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:29:35.565120   14037 main.go:141] libmachine: (addons-377932) Calling .GetMachineName
	I0725 17:29:35.565296   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:35.565448   14037 start.go:159] libmachine.API.Create for "addons-377932" (driver="kvm2")
	I0725 17:29:35.565526   14037 client.go:168] LocalClient.Create starting
	I0725 17:29:35.565566   14037 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 17:29:35.971168   14037 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 17:29:36.120642   14037 main.go:141] libmachine: Running pre-create checks...
	I0725 17:29:36.120664   14037 main.go:141] libmachine: (addons-377932) Calling .PreCreateCheck
	I0725 17:29:36.121268   14037 main.go:141] libmachine: (addons-377932) Calling .GetConfigRaw
	I0725 17:29:36.121744   14037 main.go:141] libmachine: Creating machine...
	I0725 17:29:36.121758   14037 main.go:141] libmachine: (addons-377932) Calling .Create
	I0725 17:29:36.121970   14037 main.go:141] libmachine: (addons-377932) Creating KVM machine...
	I0725 17:29:36.123219   14037 main.go:141] libmachine: (addons-377932) DBG | found existing default KVM network
	I0725 17:29:36.124069   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:36.123916   14059 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001125f0}
	I0725 17:29:36.124096   14037 main.go:141] libmachine: (addons-377932) DBG | created network xml: 
	I0725 17:29:36.124111   14037 main.go:141] libmachine: (addons-377932) DBG | <network>
	I0725 17:29:36.124120   14037 main.go:141] libmachine: (addons-377932) DBG |   <name>mk-addons-377932</name>
	I0725 17:29:36.124130   14037 main.go:141] libmachine: (addons-377932) DBG |   <dns enable='no'/>
	I0725 17:29:36.124139   14037 main.go:141] libmachine: (addons-377932) DBG |   
	I0725 17:29:36.124155   14037 main.go:141] libmachine: (addons-377932) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0725 17:29:36.124182   14037 main.go:141] libmachine: (addons-377932) DBG |     <dhcp>
	I0725 17:29:36.124202   14037 main.go:141] libmachine: (addons-377932) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0725 17:29:36.124213   14037 main.go:141] libmachine: (addons-377932) DBG |     </dhcp>
	I0725 17:29:36.124220   14037 main.go:141] libmachine: (addons-377932) DBG |   </ip>
	I0725 17:29:36.124228   14037 main.go:141] libmachine: (addons-377932) DBG |   
	I0725 17:29:36.124239   14037 main.go:141] libmachine: (addons-377932) DBG | </network>
	I0725 17:29:36.124249   14037 main.go:141] libmachine: (addons-377932) DBG | 
	I0725 17:29:36.129539   14037 main.go:141] libmachine: (addons-377932) DBG | trying to create private KVM network mk-addons-377932 192.168.39.0/24...
	I0725 17:29:36.193986   14037 main.go:141] libmachine: (addons-377932) DBG | private KVM network mk-addons-377932 192.168.39.0/24 created
	I0725 17:29:36.194027   14037 main.go:141] libmachine: (addons-377932) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932 ...
	I0725 17:29:36.194047   14037 main.go:141] libmachine: (addons-377932) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 17:29:36.194055   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:36.193979   14059 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:29:36.194123   14037 main.go:141] libmachine: (addons-377932) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 17:29:36.488956   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:36.488815   14059 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa...
	I0725 17:29:36.635362   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:36.635249   14059 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/addons-377932.rawdisk...
	I0725 17:29:36.635387   14037 main.go:141] libmachine: (addons-377932) DBG | Writing magic tar header
	I0725 17:29:36.635400   14037 main.go:141] libmachine: (addons-377932) DBG | Writing SSH key tar header
	I0725 17:29:36.635412   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:36.635359   14059 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932 ...
	I0725 17:29:36.635472   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932
	I0725 17:29:36.635519   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 17:29:36.635543   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:29:36.635557   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932 (perms=drwx------)
	I0725 17:29:36.635569   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 17:29:36.635596   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 17:29:36.635608   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 17:29:36.635618   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 17:29:36.635633   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 17:29:36.635646   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 17:29:36.635657   14037 main.go:141] libmachine: (addons-377932) Creating domain...
	I0725 17:29:36.635684   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 17:29:36.635702   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins
	I0725 17:29:36.635713   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home
	I0725 17:29:36.635725   14037 main.go:141] libmachine: (addons-377932) DBG | Skipping /home - not owner
	I0725 17:29:36.636506   14037 main.go:141] libmachine: (addons-377932) define libvirt domain using xml: 
	I0725 17:29:36.636524   14037 main.go:141] libmachine: (addons-377932) <domain type='kvm'>
	I0725 17:29:36.636534   14037 main.go:141] libmachine: (addons-377932)   <name>addons-377932</name>
	I0725 17:29:36.636546   14037 main.go:141] libmachine: (addons-377932)   <memory unit='MiB'>4000</memory>
	I0725 17:29:36.636557   14037 main.go:141] libmachine: (addons-377932)   <vcpu>2</vcpu>
	I0725 17:29:36.636564   14037 main.go:141] libmachine: (addons-377932)   <features>
	I0725 17:29:36.636573   14037 main.go:141] libmachine: (addons-377932)     <acpi/>
	I0725 17:29:36.636583   14037 main.go:141] libmachine: (addons-377932)     <apic/>
	I0725 17:29:36.636593   14037 main.go:141] libmachine: (addons-377932)     <pae/>
	I0725 17:29:36.636600   14037 main.go:141] libmachine: (addons-377932)     
	I0725 17:29:36.636608   14037 main.go:141] libmachine: (addons-377932)   </features>
	I0725 17:29:36.636615   14037 main.go:141] libmachine: (addons-377932)   <cpu mode='host-passthrough'>
	I0725 17:29:36.636621   14037 main.go:141] libmachine: (addons-377932)   
	I0725 17:29:36.636633   14037 main.go:141] libmachine: (addons-377932)   </cpu>
	I0725 17:29:36.636659   14037 main.go:141] libmachine: (addons-377932)   <os>
	I0725 17:29:36.636682   14037 main.go:141] libmachine: (addons-377932)     <type>hvm</type>
	I0725 17:29:36.636695   14037 main.go:141] libmachine: (addons-377932)     <boot dev='cdrom'/>
	I0725 17:29:36.636706   14037 main.go:141] libmachine: (addons-377932)     <boot dev='hd'/>
	I0725 17:29:36.636717   14037 main.go:141] libmachine: (addons-377932)     <bootmenu enable='no'/>
	I0725 17:29:36.636731   14037 main.go:141] libmachine: (addons-377932)   </os>
	I0725 17:29:36.636759   14037 main.go:141] libmachine: (addons-377932)   <devices>
	I0725 17:29:36.636783   14037 main.go:141] libmachine: (addons-377932)     <disk type='file' device='cdrom'>
	I0725 17:29:36.636803   14037 main.go:141] libmachine: (addons-377932)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/boot2docker.iso'/>
	I0725 17:29:36.636814   14037 main.go:141] libmachine: (addons-377932)       <target dev='hdc' bus='scsi'/>
	I0725 17:29:36.636826   14037 main.go:141] libmachine: (addons-377932)       <readonly/>
	I0725 17:29:36.636836   14037 main.go:141] libmachine: (addons-377932)     </disk>
	I0725 17:29:36.636849   14037 main.go:141] libmachine: (addons-377932)     <disk type='file' device='disk'>
	I0725 17:29:36.636865   14037 main.go:141] libmachine: (addons-377932)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 17:29:36.636882   14037 main.go:141] libmachine: (addons-377932)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/addons-377932.rawdisk'/>
	I0725 17:29:36.636893   14037 main.go:141] libmachine: (addons-377932)       <target dev='hda' bus='virtio'/>
	I0725 17:29:36.636904   14037 main.go:141] libmachine: (addons-377932)     </disk>
	I0725 17:29:36.636914   14037 main.go:141] libmachine: (addons-377932)     <interface type='network'>
	I0725 17:29:36.636927   14037 main.go:141] libmachine: (addons-377932)       <source network='mk-addons-377932'/>
	I0725 17:29:36.636941   14037 main.go:141] libmachine: (addons-377932)       <model type='virtio'/>
	I0725 17:29:36.636953   14037 main.go:141] libmachine: (addons-377932)     </interface>
	I0725 17:29:36.636963   14037 main.go:141] libmachine: (addons-377932)     <interface type='network'>
	I0725 17:29:36.636975   14037 main.go:141] libmachine: (addons-377932)       <source network='default'/>
	I0725 17:29:36.636985   14037 main.go:141] libmachine: (addons-377932)       <model type='virtio'/>
	I0725 17:29:36.636996   14037 main.go:141] libmachine: (addons-377932)     </interface>
	I0725 17:29:36.637009   14037 main.go:141] libmachine: (addons-377932)     <serial type='pty'>
	I0725 17:29:36.637021   14037 main.go:141] libmachine: (addons-377932)       <target port='0'/>
	I0725 17:29:36.637031   14037 main.go:141] libmachine: (addons-377932)     </serial>
	I0725 17:29:36.637042   14037 main.go:141] libmachine: (addons-377932)     <console type='pty'>
	I0725 17:29:36.637054   14037 main.go:141] libmachine: (addons-377932)       <target type='serial' port='0'/>
	I0725 17:29:36.637065   14037 main.go:141] libmachine: (addons-377932)     </console>
	I0725 17:29:36.637078   14037 main.go:141] libmachine: (addons-377932)     <rng model='virtio'>
	I0725 17:29:36.637092   14037 main.go:141] libmachine: (addons-377932)       <backend model='random'>/dev/random</backend>
	I0725 17:29:36.637103   14037 main.go:141] libmachine: (addons-377932)     </rng>
	I0725 17:29:36.637113   14037 main.go:141] libmachine: (addons-377932)     
	I0725 17:29:36.637123   14037 main.go:141] libmachine: (addons-377932)     
	I0725 17:29:36.637134   14037 main.go:141] libmachine: (addons-377932)   </devices>
	I0725 17:29:36.637144   14037 main.go:141] libmachine: (addons-377932) </domain>
	I0725 17:29:36.637157   14037 main.go:141] libmachine: (addons-377932) 
	I0725 17:29:36.642609   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:f5:2a:49 in network default
	I0725 17:29:36.643102   14037 main.go:141] libmachine: (addons-377932) Ensuring networks are active...
	I0725 17:29:36.643127   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:36.643638   14037 main.go:141] libmachine: (addons-377932) Ensuring network default is active
	I0725 17:29:36.643911   14037 main.go:141] libmachine: (addons-377932) Ensuring network mk-addons-377932 is active
	I0725 17:29:36.644358   14037 main.go:141] libmachine: (addons-377932) Getting domain xml...
	I0725 17:29:36.644924   14037 main.go:141] libmachine: (addons-377932) Creating domain...
	I0725 17:29:38.031137   14037 main.go:141] libmachine: (addons-377932) Waiting to get IP...
	I0725 17:29:38.031801   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:38.032127   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:38.032154   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:38.032077   14059 retry.go:31] will retry after 198.348494ms: waiting for machine to come up
	I0725 17:29:38.232504   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:38.232870   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:38.232898   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:38.232823   14059 retry.go:31] will retry after 371.403368ms: waiting for machine to come up
	I0725 17:29:38.605211   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:38.605569   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:38.605590   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:38.605536   14059 retry.go:31] will retry after 391.428532ms: waiting for machine to come up
	I0725 17:29:38.998030   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:38.998506   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:38.998534   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:38.998443   14059 retry.go:31] will retry after 559.487337ms: waiting for machine to come up
	I0725 17:29:39.559175   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:39.559530   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:39.559558   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:39.559502   14059 retry.go:31] will retry after 656.233772ms: waiting for machine to come up
	I0725 17:29:40.216859   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:40.217419   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:40.217439   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:40.217375   14059 retry.go:31] will retry after 657.72817ms: waiting for machine to come up
	I0725 17:29:40.876932   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:40.877423   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:40.877450   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:40.877375   14059 retry.go:31] will retry after 1.10158035s: waiting for machine to come up
	I0725 17:29:41.980613   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:41.981069   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:41.981098   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:41.981029   14059 retry.go:31] will retry after 1.319598156s: waiting for machine to come up
	I0725 17:29:43.302764   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:43.303193   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:43.303219   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:43.303139   14059 retry.go:31] will retry after 1.160376448s: waiting for machine to come up
	I0725 17:29:44.465308   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:44.465605   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:44.465626   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:44.465569   14059 retry.go:31] will retry after 2.267893376s: waiting for machine to come up
	I0725 17:29:46.735888   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:46.736393   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:46.736422   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:46.736340   14059 retry.go:31] will retry after 2.844725176s: waiting for machine to come up
	I0725 17:29:49.582437   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:49.582883   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:49.582909   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:49.582814   14059 retry.go:31] will retry after 2.873112905s: waiting for machine to come up
	I0725 17:29:52.458443   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:52.458945   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:52.458970   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:52.458910   14059 retry.go:31] will retry after 3.065951913s: waiting for machine to come up
	I0725 17:29:55.528120   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.528556   14037 main.go:141] libmachine: (addons-377932) Found IP for machine: 192.168.39.150
	I0725 17:29:55.528576   14037 main.go:141] libmachine: (addons-377932) Reserving static IP address...
	I0725 17:29:55.528589   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has current primary IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.528991   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find host DHCP lease matching {name: "addons-377932", mac: "52:54:00:b4:a8:62", ip: "192.168.39.150"} in network mk-addons-377932
	I0725 17:29:55.598128   14037 main.go:141] libmachine: (addons-377932) DBG | Getting to WaitForSSH function...
	I0725 17:29:55.598158   14037 main.go:141] libmachine: (addons-377932) Reserved static IP address: 192.168.39.150
	I0725 17:29:55.598182   14037 main.go:141] libmachine: (addons-377932) Waiting for SSH to be available...
	I0725 17:29:55.600769   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.601146   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:55.601176   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.601327   14037 main.go:141] libmachine: (addons-377932) DBG | Using SSH client type: external
	I0725 17:29:55.601356   14037 main.go:141] libmachine: (addons-377932) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa (-rw-------)
	I0725 17:29:55.601385   14037 main.go:141] libmachine: (addons-377932) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 17:29:55.601399   14037 main.go:141] libmachine: (addons-377932) DBG | About to run SSH command:
	I0725 17:29:55.601410   14037 main.go:141] libmachine: (addons-377932) DBG | exit 0
	I0725 17:29:55.732227   14037 main.go:141] libmachine: (addons-377932) DBG | SSH cmd err, output: <nil>: 
	I0725 17:29:55.732555   14037 main.go:141] libmachine: (addons-377932) KVM machine creation complete!
	I0725 17:29:55.732885   14037 main.go:141] libmachine: (addons-377932) Calling .GetConfigRaw
	I0725 17:29:55.733436   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:55.733697   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:55.733874   14037 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 17:29:55.733889   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:29:55.735248   14037 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 17:29:55.735261   14037 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 17:29:55.735266   14037 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 17:29:55.735272   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:55.737279   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.737647   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:55.737677   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.737844   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:55.738057   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.738206   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.738336   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:55.738497   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:55.738664   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:55.738675   14037 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 17:29:55.843347   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:29:55.843371   14037 main.go:141] libmachine: Detecting the provisioner...
	I0725 17:29:55.843380   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:55.846082   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.846408   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:55.846430   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.846560   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:55.846752   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.846909   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.847039   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:55.847196   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:55.847379   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:55.847390   14037 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 17:29:55.948504   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 17:29:55.948553   14037 main.go:141] libmachine: found compatible host: buildroot
	I0725 17:29:55.948571   14037 main.go:141] libmachine: Provisioning with buildroot...
	I0725 17:29:55.948580   14037 main.go:141] libmachine: (addons-377932) Calling .GetMachineName
	I0725 17:29:55.948823   14037 buildroot.go:166] provisioning hostname "addons-377932"
	I0725 17:29:55.948847   14037 main.go:141] libmachine: (addons-377932) Calling .GetMachineName
	I0725 17:29:55.949027   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:55.952038   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.952422   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:55.952449   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.952576   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:55.952752   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.952927   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.953169   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:55.953330   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:55.953527   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:55.953541   14037 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-377932 && echo "addons-377932" | sudo tee /etc/hostname
	I0725 17:29:56.071546   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-377932
	
	I0725 17:29:56.071576   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.074203   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.074540   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.074571   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.074740   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:56.074906   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.075049   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.075223   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:56.075423   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:56.075586   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:56.075601   14037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-377932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-377932/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-377932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:29:56.189757   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:29:56.189792   14037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 17:29:56.189837   14037 buildroot.go:174] setting up certificates
	I0725 17:29:56.189848   14037 provision.go:84] configureAuth start
	I0725 17:29:56.189860   14037 main.go:141] libmachine: (addons-377932) Calling .GetMachineName
	I0725 17:29:56.190175   14037 main.go:141] libmachine: (addons-377932) Calling .GetIP
	I0725 17:29:56.192805   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.193166   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.193191   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.193340   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.195256   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.195522   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.195545   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.195662   14037 provision.go:143] copyHostCerts
	I0725 17:29:56.195743   14037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 17:29:56.195862   14037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 17:29:56.195921   14037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 17:29:56.195968   14037 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.addons-377932 san=[127.0.0.1 192.168.39.150 addons-377932 localhost minikube]
	I0725 17:29:56.430674   14037 provision.go:177] copyRemoteCerts
	I0725 17:29:56.430734   14037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:29:56.430755   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.433411   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.433736   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.433764   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.433900   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:56.434110   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.434337   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:56.434463   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:29:56.514117   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 17:29:56.536635   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 17:29:56.557659   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 17:29:56.578634   14037 provision.go:87] duration metric: took 388.772402ms to configureAuth
	I0725 17:29:56.578659   14037 buildroot.go:189] setting minikube options for container-runtime
	I0725 17:29:56.578826   14037 config.go:182] Loaded profile config "addons-377932": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:29:56.578906   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.581591   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.581910   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.581931   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.582078   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:56.582274   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.582425   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.582653   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:56.582785   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:56.582974   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:56.582990   14037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 17:29:56.853914   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 17:29:56.853954   14037 main.go:141] libmachine: Checking connection to Docker...
	I0725 17:29:56.853967   14037 main.go:141] libmachine: (addons-377932) Calling .GetURL
	I0725 17:29:56.855204   14037 main.go:141] libmachine: (addons-377932) DBG | Using libvirt version 6000000
	I0725 17:29:56.857423   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.857740   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.857766   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.857920   14037 main.go:141] libmachine: Docker is up and running!
	I0725 17:29:56.857936   14037 main.go:141] libmachine: Reticulating splines...
	I0725 17:29:56.857945   14037 client.go:171] duration metric: took 21.292406546s to LocalClient.Create
	I0725 17:29:56.857971   14037 start.go:167] duration metric: took 21.292528939s to libmachine.API.Create "addons-377932"
	I0725 17:29:56.857984   14037 start.go:293] postStartSetup for "addons-377932" (driver="kvm2")
	I0725 17:29:56.857997   14037 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:29:56.858017   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:56.858246   14037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:29:56.858276   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.860817   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.861152   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.861175   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.861293   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:56.861497   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.861661   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:56.861799   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:29:56.941980   14037 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:29:56.945875   14037 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 17:29:56.945894   14037 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 17:29:56.945965   14037 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 17:29:56.945987   14037 start.go:296] duration metric: took 87.998176ms for postStartSetup
	I0725 17:29:56.946017   14037 main.go:141] libmachine: (addons-377932) Calling .GetConfigRaw
	I0725 17:29:56.946540   14037 main.go:141] libmachine: (addons-377932) Calling .GetIP
	I0725 17:29:56.949001   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.949409   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.949439   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.949767   14037 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/config.json ...
	I0725 17:29:56.949973   14037 start.go:128] duration metric: took 21.401723832s to createHost
	I0725 17:29:56.949996   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.952743   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.953087   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.953108   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.953233   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:56.953417   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.953561   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.953721   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:56.953880   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:56.954031   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:56.954040   14037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 17:29:57.060703   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721928597.033917637
	
	I0725 17:29:57.060725   14037 fix.go:216] guest clock: 1721928597.033917637
	I0725 17:29:57.060733   14037 fix.go:229] Guest: 2024-07-25 17:29:57.033917637 +0000 UTC Remote: 2024-07-25 17:29:56.949984849 +0000 UTC m=+21.498950979 (delta=83.932788ms)
	I0725 17:29:57.060777   14037 fix.go:200] guest clock delta is within tolerance: 83.932788ms
	I0725 17:29:57.060783   14037 start.go:83] releasing machines lock for "addons-377932", held for 21.512599051s
	I0725 17:29:57.060804   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:57.061049   14037 main.go:141] libmachine: (addons-377932) Calling .GetIP
	I0725 17:29:57.063861   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.064183   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:57.064202   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.064391   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:57.064871   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:57.065115   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:57.065208   14037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 17:29:57.065246   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:57.065324   14037 ssh_runner.go:195] Run: cat /version.json
	I0725 17:29:57.065340   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:57.067884   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.067980   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.068317   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:57.068361   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.068383   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:57.068395   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.068557   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:57.068659   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:57.068821   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:57.068840   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:57.068956   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:57.068969   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:57.069075   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:29:57.069080   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:29:57.192662   14037 ssh_runner.go:195] Run: systemctl --version
	I0725 17:29:57.198592   14037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 17:29:57.347612   14037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 17:29:57.353347   14037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 17:29:57.353431   14037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 17:29:57.367887   14037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 17:29:57.367911   14037 start.go:495] detecting cgroup driver to use...
	I0725 17:29:57.367981   14037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 17:29:57.382431   14037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 17:29:57.395385   14037 docker.go:217] disabling cri-docker service (if available) ...
	I0725 17:29:57.395448   14037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 17:29:57.408459   14037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 17:29:57.422925   14037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 17:29:57.549552   14037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 17:29:57.699999   14037 docker.go:233] disabling docker service ...
	I0725 17:29:57.700069   14037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 17:29:57.713255   14037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 17:29:57.725340   14037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 17:29:57.839839   14037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 17:29:57.954689   14037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 17:29:57.972570   14037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:29:57.989913   14037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 17:29:57.989980   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:57.999476   14037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 17:29:57.999541   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.009406   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.020292   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.029892   14037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 17:29:58.039611   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.048918   14037 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.063959   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.073462   14037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 17:29:58.082074   14037 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 17:29:58.082125   14037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 17:29:58.093842   14037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 17:29:58.102684   14037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:29:58.209093   14037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 17:29:58.334896   14037 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 17:29:58.334984   14037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 17:29:58.339238   14037 start.go:563] Will wait 60s for crictl version
	I0725 17:29:58.339301   14037 ssh_runner.go:195] Run: which crictl
	I0725 17:29:58.342595   14037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 17:29:58.378421   14037 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 17:29:58.378516   14037 ssh_runner.go:195] Run: crio --version
	I0725 17:29:58.405153   14037 ssh_runner.go:195] Run: crio --version
	I0725 17:29:58.434375   14037 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 17:29:58.435799   14037 main.go:141] libmachine: (addons-377932) Calling .GetIP
	I0725 17:29:58.438439   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:58.438772   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:58.438797   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:58.439073   14037 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 17:29:58.442923   14037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:29:58.454764   14037 kubeadm.go:883] updating cluster {Name:addons-377932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 17:29:58.454865   14037 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:29:58.454907   14037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:29:58.484834   14037 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 17:29:58.484897   14037 ssh_runner.go:195] Run: which lz4
	I0725 17:29:58.488525   14037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 17:29:58.492306   14037 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 17:29:58.492352   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 17:29:59.619950   14037 crio.go:462] duration metric: took 1.131449747s to copy over tarball
	I0725 17:29:59.620025   14037 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 17:30:01.853326   14037 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.233273989s)
	I0725 17:30:01.853361   14037 crio.go:469] duration metric: took 2.233384178s to extract the tarball
	I0725 17:30:01.853368   14037 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 17:30:01.890983   14037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:30:01.934697   14037 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 17:30:01.934720   14037 cache_images.go:84] Images are preloaded, skipping loading
	I0725 17:30:01.934729   14037 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.30.3 crio true true} ...
	I0725 17:30:01.934856   14037 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-377932 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 17:30:01.934934   14037 ssh_runner.go:195] Run: crio config
	I0725 17:30:01.985104   14037 cni.go:84] Creating CNI manager for ""
	I0725 17:30:01.985125   14037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 17:30:01.985137   14037 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 17:30:01.985157   14037 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-377932 NodeName:addons-377932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 17:30:01.985284   14037 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-377932"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 17:30:01.985341   14037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 17:30:01.995435   14037 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 17:30:01.995507   14037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 17:30:02.004650   14037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0725 17:30:02.020294   14037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:30:02.035791   14037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0725 17:30:02.050739   14037 ssh_runner.go:195] Run: grep 192.168.39.150	control-plane.minikube.internal$ /etc/hosts
	I0725 17:30:02.054487   14037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:30:02.065514   14037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:30:02.177851   14037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:30:02.193963   14037 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932 for IP: 192.168.39.150
	I0725 17:30:02.193990   14037 certs.go:194] generating shared ca certs ...
	I0725 17:30:02.194009   14037 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.194181   14037 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 17:30:02.356378   14037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt ...
	I0725 17:30:02.356409   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt: {Name:mk4dbfb6c929c0f89f5410dfe7f5a6ded2c7abbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.356632   14037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key ...
	I0725 17:30:02.356650   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key: {Name:mk4e33c2ec36f72504eaacd6c4453cec5f6a0fdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.356770   14037 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 17:30:02.591810   14037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt ...
	I0725 17:30:02.591844   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt: {Name:mk9d6644fd5c0d5e0ce0a831a082f277ae778296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.592032   14037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key ...
	I0725 17:30:02.592044   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key: {Name:mk66d8bc8e5de2f635608853f8a33928fea3e40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.592116   14037 certs.go:256] generating profile certs ...
	I0725 17:30:02.592169   14037 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.key
	I0725 17:30:02.592184   14037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt with IP's: []
	I0725 17:30:02.913501   14037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt ...
	I0725 17:30:02.913533   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: {Name:mkf02e505348e429a8c13e822a6b4978fc12c96e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.913709   14037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.key ...
	I0725 17:30:02.913721   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.key: {Name:mkcc39f27e83991ea55ff0cd42be2c158789e3f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.913797   14037 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key.312dc7bb
	I0725 17:30:02.913817   14037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt.312dc7bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150]
	I0725 17:30:03.101135   14037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt.312dc7bb ...
	I0725 17:30:03.101164   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt.312dc7bb: {Name:mka12d82691d7fddeaa9f79458083ad330ae80e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:03.101323   14037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key.312dc7bb ...
	I0725 17:30:03.101336   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key.312dc7bb: {Name:mk4c01ac180141516918736912bcc92e918f5599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:03.101402   14037 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt.312dc7bb -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt
	I0725 17:30:03.101476   14037 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key.312dc7bb -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key
	I0725 17:30:03.101521   14037 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.key
	I0725 17:30:03.101538   14037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.crt with IP's: []
	I0725 17:30:03.186946   14037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.crt ...
	I0725 17:30:03.186974   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.crt: {Name:mk94acbb828e3670ee4984e84cb9a6002a81e64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:03.187175   14037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.key ...
	I0725 17:30:03.187190   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.key: {Name:mkf228049e0f765d2437faa2c80c2a597524df60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:03.187371   14037 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:30:03.187403   14037 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 17:30:03.187429   14037 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:30:03.187451   14037 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 17:30:03.188021   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:30:03.214966   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 17:30:03.242170   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:30:03.269132   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 17:30:03.295269   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0725 17:30:03.318933   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 17:30:03.341333   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:30:03.363394   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:30:03.385295   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:30:03.407436   14037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 17:30:03.423401   14037 ssh_runner.go:195] Run: openssl version
	I0725 17:30:03.429810   14037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:30:03.440419   14037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:30:03.444571   14037 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:30:03.444626   14037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:30:03.450185   14037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:30:03.460783   14037 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 17:30:03.465066   14037 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 17:30:03.465126   14037 kubeadm.go:392] StartCluster: {Name:addons-377932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:30:03.465238   14037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 17:30:03.465293   14037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 17:30:03.498504   14037 cri.go:89] found id: ""
	I0725 17:30:03.498576   14037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 17:30:03.508358   14037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:30:03.517827   14037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:30:03.526652   14037 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 17:30:03.526676   14037 kubeadm.go:157] found existing configuration files:
	
	I0725 17:30:03.526722   14037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 17:30:03.535418   14037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 17:30:03.535491   14037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 17:30:03.544526   14037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 17:30:03.552818   14037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 17:30:03.552873   14037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 17:30:03.561886   14037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 17:30:03.570410   14037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 17:30:03.570460   14037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 17:30:03.579110   14037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 17:30:03.587429   14037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 17:30:03.587491   14037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
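The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint, and any file that is missing or points elsewhere is deleted so kubeadm can regenerate it. A minimal local sketch of that pattern in Go (illustrative only, not minikube's ssh_runner-based code; the paths and endpoint are taken from the log):

    // Sketch of the stale-kubeconfig check seen in the log above.
    // Assumption: running directly on the node; the real flow runs these
    // commands over SSH via minikube's ssh_runner.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, c := range confs {
            // grep exits 0 only when the endpoint is present in the file.
            if err := exec.Command("sudo", "grep", endpoint, c).Run(); err != nil {
                fmt.Printf("%s is missing or stale, removing\n", c)
                _ = exec.Command("sudo", "rm", "-f", c).Run()
            }
        }
    }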
	I0725 17:30:03.596300   14037 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 17:30:03.776546   14037 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 17:30:13.711710   14037 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 17:30:13.711780   14037 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 17:30:13.711899   14037 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 17:30:13.712001   14037 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 17:30:13.712088   14037 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 17:30:13.712183   14037 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 17:30:13.714417   14037 out.go:204]   - Generating certificates and keys ...
	I0725 17:30:13.714520   14037 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 17:30:13.714609   14037 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 17:30:13.714701   14037 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 17:30:13.714760   14037 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 17:30:13.714808   14037 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 17:30:13.714851   14037 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 17:30:13.714903   14037 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 17:30:13.715068   14037 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-377932 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0725 17:30:13.715143   14037 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 17:30:13.715299   14037 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-377932 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0725 17:30:13.715458   14037 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 17:30:13.715556   14037 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 17:30:13.715618   14037 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 17:30:13.715700   14037 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 17:30:13.715774   14037 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 17:30:13.715828   14037 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 17:30:13.715881   14037 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 17:30:13.715978   14037 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 17:30:13.716042   14037 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 17:30:13.716118   14037 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 17:30:13.716173   14037 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 17:30:13.717526   14037 out.go:204]   - Booting up control plane ...
	I0725 17:30:13.717614   14037 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 17:30:13.717675   14037 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 17:30:13.717729   14037 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 17:30:13.717815   14037 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 17:30:13.717884   14037 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 17:30:13.717934   14037 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 17:30:13.718121   14037 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 17:30:13.718220   14037 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 17:30:13.718274   14037 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.28175ms
	I0725 17:30:13.718350   14037 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 17:30:13.718410   14037 kubeadm.go:310] [api-check] The API server is healthy after 5.501270616s
	I0725 17:30:13.718501   14037 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 17:30:13.718601   14037 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 17:30:13.718656   14037 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 17:30:13.718802   14037 kubeadm.go:310] [mark-control-plane] Marking the node addons-377932 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 17:30:13.718861   14037 kubeadm.go:310] [bootstrap-token] Using token: kzvuql.b3y2zkhnhyb7z65l
	I0725 17:30:13.720239   14037 out.go:204]   - Configuring RBAC rules ...
	I0725 17:30:13.720387   14037 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 17:30:13.720463   14037 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 17:30:13.720578   14037 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 17:30:13.720674   14037 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 17:30:13.720766   14037 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 17:30:13.720831   14037 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 17:30:13.720942   14037 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 17:30:13.720977   14037 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 17:30:13.721014   14037 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 17:30:13.721019   14037 kubeadm.go:310] 
	I0725 17:30:13.721063   14037 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 17:30:13.721069   14037 kubeadm.go:310] 
	I0725 17:30:13.721128   14037 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 17:30:13.721136   14037 kubeadm.go:310] 
	I0725 17:30:13.721161   14037 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 17:30:13.721252   14037 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 17:30:13.721412   14037 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 17:30:13.721443   14037 kubeadm.go:310] 
	I0725 17:30:13.721530   14037 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 17:30:13.721557   14037 kubeadm.go:310] 
	I0725 17:30:13.721624   14037 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 17:30:13.721644   14037 kubeadm.go:310] 
	I0725 17:30:13.721727   14037 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 17:30:13.721799   14037 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 17:30:13.721866   14037 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 17:30:13.721873   14037 kubeadm.go:310] 
	I0725 17:30:13.721963   14037 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 17:30:13.722088   14037 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 17:30:13.722101   14037 kubeadm.go:310] 
	I0725 17:30:13.722208   14037 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kzvuql.b3y2zkhnhyb7z65l \
	I0725 17:30:13.722352   14037 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 \
	I0725 17:30:13.722377   14037 kubeadm.go:310] 	--control-plane 
	I0725 17:30:13.722393   14037 kubeadm.go:310] 
	I0725 17:30:13.722501   14037 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 17:30:13.722512   14037 kubeadm.go:310] 
	I0725 17:30:13.722618   14037 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kzvuql.b3y2zkhnhyb7z65l \
	I0725 17:30:13.722712   14037 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 
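For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo). A small Go sketch that recomputes it, assuming the conventional kubeadm CA location on the node (/etc/kubernetes/pki/ca.crt is an assumption, not shown in this log):

    // Recompute kubeadm's discovery-token-ca-cert-hash from the CA cert.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }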
	I0725 17:30:13.722744   14037 cni.go:84] Creating CNI manager for ""
	I0725 17:30:13.722756   14037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 17:30:13.724409   14037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 17:30:13.725689   14037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 17:30:13.736164   14037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
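The mkdir and scp above are the whole bridge-CNI setup: create /etc/cni/net.d and write a single conflist. The 496-byte payload itself is not reproduced in the log; the sketch below writes a generic bridge conflist of the kind such a file typically contains. The plugin options and subnet are illustrative assumptions, not the file's actual contents.

    // Illustrative bridge CNI conflist writer; the real 1-k8s.conflist
    // written by minikube above may differ in detail (subnet, options).
    package main

    import (
        "os"
        "path/filepath"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        dir := "/etc/cni/net.d"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            panic(err)
        }
        path := filepath.Join(dir, "1-k8s.conflist")
        if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }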
	I0725 17:30:13.753135   14037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 17:30:13.753221   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:13.753251   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-377932 minikube.k8s.io/updated_at=2024_07_25T17_30_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=addons-377932 minikube.k8s.io/primary=true
	I0725 17:30:13.771636   14037 ops.go:34] apiserver oom_adj: -16
	I0725 17:30:13.932222   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:14.432846   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:14.932535   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:15.432979   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:15.933055   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:16.432473   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:16.932766   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:17.432505   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:17.933274   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:18.432934   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:18.933196   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:19.433077   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:19.932599   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:20.432972   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:20.932923   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:21.432286   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:21.933277   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:22.432425   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:22.932503   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:23.432733   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:23.932576   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:24.432996   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:24.932595   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:25.432397   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:25.932523   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:26.432818   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:26.508179   14037 kubeadm.go:1113] duration metric: took 12.755028778s to wait for elevateKubeSystemPrivileges
	I0725 17:30:26.508217   14037 kubeadm.go:394] duration metric: took 23.043092848s to StartCluster
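The burst of identical "kubectl get sa default" calls above is a readiness poll: minikube retries roughly every 500ms until the default ServiceAccount exists, and the 12.755s elevateKubeSystemPrivileges metric is simply how long that took (the cluster-admin binding for kube-system:default was created alongside it). A minimal sketch of that wait loop, with the command and kubeconfig path copied from the log and the timeout chosen only for illustration:

    // Poll until the default ServiceAccount exists, as in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
        deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default ServiceAccount")
    }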
	I0725 17:30:26.508239   14037 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:26.508376   14037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:30:26.508749   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:26.508938   14037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 17:30:26.508967   14037 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:30:26.509037   14037 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0725 17:30:26.509115   14037 addons.go:69] Setting yakd=true in profile "addons-377932"
	I0725 17:30:26.509119   14037 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-377932"
	I0725 17:30:26.509146   14037 addons.go:234] Setting addon yakd=true in "addons-377932"
	I0725 17:30:26.509173   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.509214   14037 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-377932"
	I0725 17:30:26.509221   14037 addons.go:69] Setting registry=true in profile "addons-377932"
	I0725 17:30:26.509206   14037 addons.go:69] Setting helm-tiller=true in profile "addons-377932"
	I0725 17:30:26.509226   14037 addons.go:69] Setting metrics-server=true in profile "addons-377932"
	I0725 17:30:26.509264   14037 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-377932"
	I0725 17:30:26.509267   14037 addons.go:69] Setting default-storageclass=true in profile "addons-377932"
	I0725 17:30:26.509271   14037 addons.go:69] Setting gcp-auth=true in profile "addons-377932"
	I0725 17:30:26.509288   14037 mustload.go:65] Loading cluster: addons-377932
	I0725 17:30:26.509289   14037 addons.go:234] Setting addon helm-tiller=true in "addons-377932"
	I0725 17:30:26.509301   14037 addons.go:69] Setting ingress=true in profile "addons-377932"
	I0725 17:30:26.509314   14037 addons.go:69] Setting volcano=true in profile "addons-377932"
	I0725 17:30:26.509333   14037 addons.go:234] Setting addon ingress=true in "addons-377932"
	I0725 17:30:26.509804   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.509327   14037 addons.go:69] Setting volumesnapshots=true in profile "addons-377932"
	I0725 17:30:26.509892   14037 addons.go:69] Setting inspektor-gadget=true in profile "addons-377932"
	I0725 17:30:26.509256   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.509932   14037 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-377932"
	I0725 17:30:26.509173   14037 config.go:182] Loaded profile config "addons-377932": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:30:26.509141   14037 addons.go:69] Setting cloud-spanner=true in profile "addons-377932"
	I0725 17:30:26.509302   14037 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-377932"
	I0725 17:30:26.510015   14037 addons.go:234] Setting addon cloud-spanner=true in "addons-377932"
	I0725 17:30:26.510043   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.510053   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.510086   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.510468   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.510546   14037 config.go:182] Loaded profile config "addons-377932": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:30:26.510566   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.510596   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.510882   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.510903   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.510933   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.510954   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.510948   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.510988   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.511012   14037 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-377932"
	I0725 17:30:26.509315   14037 addons.go:69] Setting ingress-dns=true in profile "addons-377932"
	I0725 17:30:26.511456   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.511563   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.510501   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.511467   14037 addons.go:234] Setting addon ingress-dns=true in "addons-377932"
	I0725 17:30:26.509342   14037 addons.go:234] Setting addon volcano=true in "addons-377932"
	I0725 17:30:26.512027   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.512044   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.512083   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.510548   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.509292   14037 addons.go:234] Setting addon metrics-server=true in "addons-377932"
	I0725 17:30:26.509302   14037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-377932"
	I0725 17:30:26.509919   14037 addons.go:234] Setting addon inspektor-gadget=true in "addons-377932"
	I0725 17:30:26.512379   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.509919   14037 addons.go:234] Setting addon volumesnapshots=true in "addons-377932"
	I0725 17:30:26.512564   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.510501   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.512745   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.512794   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.512747   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.512877   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.509257   14037 addons.go:234] Setting addon registry=true in "addons-377932"
	I0725 17:30:26.509189   14037 addons.go:69] Setting storage-provisioner=true in profile "addons-377932"
	I0725 17:30:26.512891   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.512930   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.512990   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.513039   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.513064   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.513075   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.513090   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.513086   14037 addons.go:234] Setting addon storage-provisioner=true in "addons-377932"
	I0725 17:30:26.513340   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.513848   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.513884   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.530410   14037 out.go:177] * Verifying Kubernetes components...
	I0725 17:30:26.530449   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.531156   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.531413   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.531451   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.531597   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.531615   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.532432   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I0725 17:30:26.532715   14037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:30:26.534701   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0725 17:30:26.534712   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.534797   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0725 17:30:26.540880   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0725 17:30:26.541339   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.541369   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.541462   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.542056   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.542239   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.542259   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.542947   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.542999   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.543491   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.543552   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I0725 17:30:26.543713   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.543889   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.544367   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.544387   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.544712   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.548030   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43483
	I0725 17:30:26.550617   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0725 17:30:26.551058   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.553261   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0725 17:30:26.553836   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44543
	I0725 17:30:26.558597   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0725 17:30:26.559513   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.565376   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0725 17:30:26.566084   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.566120   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.566426   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.566665   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.566679   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.566725   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.566747   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.566790   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.566820   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.566928   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.566937   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.568064   14037 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-377932"
	I0725 17:30:26.568115   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.568515   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.568545   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.569360   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.569439   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.569457   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.569519   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.569532   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.569589   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.569602   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.569652   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.569663   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.569706   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.569715   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.569716   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.570161   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.570226   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.570226   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.570261   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.570292   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.570527   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.570568   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.570730   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.570756   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.570834   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.570869   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.571512   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.571597   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.571664   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I0725 17:30:26.572127   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.572167   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.572764   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.572801   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.579403   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.579453   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.579487   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42665
	I0725 17:30:26.579601   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.579827   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.580037   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.580051   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.580163   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.580173   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.580514   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.580527   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.580738   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.581186   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.581208   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.581791   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.581807   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.581996   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.582355   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.582387   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.582417   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.582636   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.583052   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.583072   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.589530   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39737
	I0725 17:30:26.590052   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.590614   14037 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0725 17:30:26.590723   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.590746   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.591104   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.591256   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.593066   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.593414   14037 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0725 17:30:26.595378   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0725 17:30:26.596468   14037 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0725 17:30:26.597772   14037 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0725 17:30:26.597795   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0725 17:30:26.597815   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.597885   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0725 17:30:26.599052   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0725 17:30:26.599763   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.600067   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0725 17:30:26.600503   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.600519   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.601077   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.601194   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0725 17:30:26.601334   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.601613   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.601701   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.601716   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.602029   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.602064   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.602241   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.602256   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0725 17:30:26.602263   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.602246   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.602469   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.602574   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.602665   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.602719   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.602963   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.604251   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0725 17:30:26.604663   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.605056   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:26.605072   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:26.605217   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:26.605229   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:26.605238   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:26.605247   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:26.605248   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:26.605390   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:26.605404   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	W0725 17:30:26.605475   14037 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0725 17:30:26.606242   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0725 17:30:26.607241   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0725 17:30:26.608265   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0725 17:30:26.609200   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0725 17:30:26.609214   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0725 17:30:26.609234   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.609498   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38407
	I0725 17:30:26.609521   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0725 17:30:26.610014   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.610124   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.610654   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.610672   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.611055   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.611280   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.612285   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.612302   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.612393   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.612412   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.612431   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.612892   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.613119   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.613177   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.613401   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.613632   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.613902   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.614397   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I0725 17:30:26.614560   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.614860   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.615495   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.615513   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.616164   14037 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0725 17:30:26.616621   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.617258   14037 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0725 17:30:26.617272   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0725 17:30:26.617285   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.617610   14037 addons.go:234] Setting addon default-storageclass=true in "addons-377932"
	I0725 17:30:26.617656   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.618031   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.618055   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.618344   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.620100   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I0725 17:30:26.620575   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.620673   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.620755   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.621015   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.621022   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.621038   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.621506   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.621524   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.621760   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.621917   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.621933   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.622214   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.622232   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.622939   14037 out.go:177]   - Using image docker.io/registry:2.8.3
	I0725 17:30:26.624062   14037 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0725 17:30:26.625203   14037 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0725 17:30:26.625222   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0725 17:30:26.625238   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.626513   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42675
	I0725 17:30:26.627340   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.628401   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.628425   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.628797   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.628892   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.629239   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.629326   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.629637   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.629663   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.629872   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.630042   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.630211   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.630353   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.632272   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0725 17:30:26.632843   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.633333   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.633349   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.633723   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.634279   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.634314   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.636459   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40595
	I0725 17:30:26.636857   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.637702   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.637724   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.638019   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.638523   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.638556   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.645196   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41245
	I0725 17:30:26.645810   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.646332   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.646351   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.646658   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.646825   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.648589   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.650609   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0725 17:30:26.651568   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35351
	I0725 17:30:26.651998   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.652079   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0725 17:30:26.652089   14037 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0725 17:30:26.652107   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.652632   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.652648   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.653210   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.653401   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.655783   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.656034   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38511
	I0725 17:30:26.656160   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.656533   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.656754   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.656783   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.656977   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.657184   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.657197   14037 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0725 17:30:26.657204   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.657215   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.657421   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.657532   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.657582   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.658233   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.658273   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.658521   14037 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 17:30:26.658543   14037 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 17:30:26.658560   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.661187   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0725 17:30:26.662231   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.662522   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.662629   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.662644   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.662677   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.663275   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.663293   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.663354   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.663742   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.663811   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.664054   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.664103   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42275
	I0725 17:30:26.664109   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.664317   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0725 17:30:26.664463   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.665617   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.665967   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.666653   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.666670   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.667075   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.667368   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.667888   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0725 17:30:26.667911   14037 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0725 17:30:26.668359   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.668799   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.668822   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.669242   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.669290   14037 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0725 17:30:26.669308   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0725 17:30:26.669328   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.669294   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.669442   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.670265   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.670286   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.670642   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.670802   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.671021   14037 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0725 17:30:26.672215   14037 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0725 17:30:26.672231   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0725 17:30:26.672246   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.672245   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.673555   14037 out.go:177]   - Using image docker.io/busybox:stable
	I0725 17:30:26.674819   14037 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0725 17:30:26.675733   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.675800   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32979
	I0725 17:30:26.675945   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0725 17:30:26.676170   14037 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0725 17:30:26.676188   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0725 17:30:26.676204   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.676240   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.676338   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.676658   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.676677   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.676760   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.676983   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36401
	I0725 17:30:26.677381   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.677524   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.677604   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.677736   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.677749   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.678332   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.678589   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.678972   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.679102   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.679347   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.679393   14037 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 17:30:26.679404   14037 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 17:30:26.679418   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.679477   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.679493   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.679600   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.679621   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.679766   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.680105   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.680252   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.680400   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.680488   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.680512   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.680911   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.680961   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.680980   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.681044   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.681208   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.681217   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.681283   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.681323   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.681369   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.681593   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.681764   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.682166   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.682315   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.682549   14037 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0725 17:30:26.682678   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.683084   14037 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0725 17:30:26.683080   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.683526   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.683296   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.683751   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.683906   14037 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0725 17:30:26.683919   14037 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0725 17:30:26.683928   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.683932   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.683963   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.684136   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.684668   14037 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0725 17:30:26.684686   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0725 17:30:26.684701   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.685482   14037 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0725 17:30:26.686582   14037 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0725 17:30:26.686597   14037 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0725 17:30:26.686615   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.687408   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.688142   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.688161   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.688211   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.688385   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.688676   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.688674   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.688729   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.688834   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.688984   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.688985   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.689138   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.689254   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.689384   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.689493   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.689814   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.689836   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.690029   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.690188   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.690321   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.690462   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	W0725 17:30:26.693129   14037 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47776->192.168.39.150:22: read: connection reset by peer
	I0725 17:30:26.693157   14037 retry.go:31] will retry after 359.642328ms: ssh: handshake failed: read tcp 192.168.39.1:47776->192.168.39.150:22: read: connection reset by peer
	W0725 17:30:26.693221   14037 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47780->192.168.39.150:22: read: connection reset by peer
	I0725 17:30:26.693234   14037 retry.go:31] will retry after 239.250865ms: ssh: handshake failed: read tcp 192.168.39.1:47780->192.168.39.150:22: read: connection reset by peer
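The two handshake failures above are transient: the SSH daemon inside the freshly booted VM resets the first connections, so minikube's ssh helper simply retries each dial after a short per-attempt delay (239ms and 359ms here). Below is a minimal, self-contained Go sketch of that retry-on-dial-failure pattern; the attempt count and delay schedule are illustrative assumptions, not minikube's actual retry policy.

```go
// Illustrative only: retry a TCP dial a few times, sleeping a bit longer
// after each failure, in the spirit of the sshutil/retry log lines above.
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry tries to reach addr up to `attempts` times.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		delay := time.Duration(i+1) * 250 * time.Millisecond // assumed backoff
		fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	if conn, err := dialWithRetry("192.168.39.150:22", 3); err == nil {
		conn.Close()
	}
}
```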
	I0725 17:30:26.695154   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39045
	I0725 17:30:26.695511   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.696000   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.696018   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.696297   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.696542   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.697773   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.699516   14037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 17:30:26.700751   14037 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:30:26.700766   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 17:30:26.700783   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.703483   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.703880   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.703899   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.704053   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.704223   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.704372   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.704490   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.856420   14037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:30:26.856853   14037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 17:30:26.942739   14037 node_ready.go:35] waiting up to 6m0s for node "addons-377932" to be "Ready" ...
	I0725 17:30:26.950391   14037 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0725 17:30:26.950418   14037 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0725 17:30:26.991959   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 17:30:26.994415   14037 node_ready.go:49] node "addons-377932" has status "Ready":"True"
	I0725 17:30:26.994435   14037 node_ready.go:38] duration metric: took 51.673222ms for node "addons-377932" to be "Ready" ...
	I0725 17:30:26.994445   14037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:30:27.024358   14037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.027758   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0725 17:30:27.027777   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0725 17:30:27.059861   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0725 17:30:27.073551   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0725 17:30:27.081510   14037 pod_ready.go:92] pod "etcd-addons-377932" in "kube-system" namespace has status "Ready":"True"
	I0725 17:30:27.081529   14037 pod_ready.go:81] duration metric: took 57.142659ms for pod "etcd-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.081538   14037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.082937   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:30:27.094301   14037 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0725 17:30:27.094330   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0725 17:30:27.109948   14037 pod_ready.go:92] pod "kube-apiserver-addons-377932" in "kube-system" namespace has status "Ready":"True"
	I0725 17:30:27.109965   14037 pod_ready.go:81] duration metric: took 28.42115ms for pod "kube-apiserver-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.109975   14037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.127627   14037 pod_ready.go:92] pod "kube-controller-manager-addons-377932" in "kube-system" namespace has status "Ready":"True"
	I0725 17:30:27.127647   14037 pod_ready.go:81] duration metric: took 17.665924ms for pod "kube-controller-manager-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.127656   14037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lvfsq" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.218026   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0725 17:30:27.223844   14037 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 17:30:27.223862   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0725 17:30:27.277644   14037 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0725 17:30:27.277666   14037 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0725 17:30:27.280873   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0725 17:30:27.290738   14037 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0725 17:30:27.290763   14037 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0725 17:30:27.299376   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0725 17:30:27.321019   14037 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0725 17:30:27.321046   14037 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0725 17:30:27.330013   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0725 17:30:27.330032   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0725 17:30:27.380008   14037 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0725 17:30:27.380029   14037 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0725 17:30:27.398574   14037 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0725 17:30:27.398600   14037 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0725 17:30:27.457196   14037 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0725 17:30:27.457223   14037 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0725 17:30:27.458617   14037 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 17:30:27.458639   14037 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 17:30:27.467756   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0725 17:30:27.467777   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0725 17:30:27.472690   14037 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0725 17:30:27.472717   14037 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0725 17:30:27.521353   14037 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0725 17:30:27.521375   14037 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0725 17:30:27.568449   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0725 17:30:27.585481   14037 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0725 17:30:27.585508   14037 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0725 17:30:27.587271   14037 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0725 17:30:27.587292   14037 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0725 17:30:27.593926   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0725 17:30:27.629117   14037 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:30:27.629136   14037 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 17:30:27.656673   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0725 17:30:27.656699   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0725 17:30:27.700184   14037 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0725 17:30:27.700207   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0725 17:30:27.736208   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0725 17:30:27.736235   14037 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0725 17:30:27.751771   14037 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0725 17:30:27.751797   14037 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0725 17:30:27.818829   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0725 17:30:27.818852   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0725 17:30:27.879044   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:30:27.905665   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0725 17:30:27.905690   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0725 17:30:27.967943   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0725 17:30:27.983486   14037 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0725 17:30:27.983509   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0725 17:30:28.042366   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0725 17:30:28.042397   14037 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0725 17:30:28.066816   14037 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0725 17:30:28.066851   14037 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0725 17:30:28.158008   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0725 17:30:28.209655   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0725 17:30:28.209680   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0725 17:30:28.265735   14037 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0725 17:30:28.265761   14037 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0725 17:30:28.496052   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0725 17:30:28.496078   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0725 17:30:28.542715   14037 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0725 17:30:28.542740   14037 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0725 17:30:28.681084   14037 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.824193331s)
	I0725 17:30:28.681114   14037 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
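The long pipeline that just completed (1.82s) rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-only gateway 192.168.39.1. The following Go sketch shows only the textual edit the sed expression performs, inserting a `hosts` block before the `forward . /etc/resolv.conf` line; the Corefile text used here is a trimmed assumption, not the cluster's full config.

```go
// Minimal sketch of the Corefile patch applied above (not minikube's code).
package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	hosts := "        hosts {\n" +
		"           192.168.39.1 host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	// Insert the hosts block immediately before the forward directive.
	patched := strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
	fmt.Print(patched)
}
```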
	I0725 17:30:28.681119   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.68913118s)
	I0725 17:30:28.681163   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:28.681179   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:28.681202   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.62131223s)
	I0725 17:30:28.681258   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:28.681276   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:28.681443   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:28.681456   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:28.681465   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:28.681472   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:28.681572   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:28.681611   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:28.681628   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:28.681645   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:28.681667   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:28.681733   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:28.681744   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:28.682202   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:28.682212   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:28.682221   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:28.708827   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:28.708849   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:28.709134   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:28.709174   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:28.709183   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:28.728193   14037 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0725 17:30:28.728218   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0725 17:30:28.918382   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0725 17:30:28.918408   14037 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0725 17:30:28.935308   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0725 17:30:29.132975   14037 pod_ready.go:102] pod "kube-proxy-lvfsq" in "kube-system" namespace has status "Ready":"False"
	I0725 17:30:29.137578   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0725 17:30:29.185384   14037 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-377932" context rescaled to 1 replicas
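Rescaling the coredns deployment to a single replica goes through the Kubernetes scale subresource. A hedged client-go sketch of that operation follows; the kubeconfig path is an assumption for illustration, and this is not minikube's kapi.go implementation.

```go
// Illustrative: scale kube-system/coredns to 1 replica via the scale subresource.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	scale, err := clientset.AppsV1().Deployments("kube-system").
		GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := clientset.AppsV1().Deployments("kube-system").
		UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
```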
	I0725 17:30:31.191287   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.11769951s)
	I0725 17:30:31.191359   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:31.191372   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:31.191710   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:31.191728   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:31.191744   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:31.191759   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:31.191772   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:31.192087   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:31.192103   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:31.203430   14037 pod_ready.go:102] pod "kube-proxy-lvfsq" in "kube-system" namespace has status "Ready":"False"
	I0725 17:30:31.205948   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.122986006s)
	I0725 17:30:31.205986   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:31.205999   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:31.206224   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:31.206238   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:31.206248   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:31.206256   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:31.206541   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:31.206561   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:31.320171   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:31.320196   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:31.320475   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:31.320493   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:31.779423   14037 pod_ready.go:92] pod "kube-proxy-lvfsq" in "kube-system" namespace has status "Ready":"True"
	I0725 17:30:31.779444   14037 pod_ready.go:81] duration metric: took 4.651781743s for pod "kube-proxy-lvfsq" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:31.779453   14037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:31.881899   14037 pod_ready.go:92] pod "kube-scheduler-addons-377932" in "kube-system" namespace has status "Ready":"True"
	I0725 17:30:31.881925   14037 pod_ready.go:81] duration metric: took 102.463485ms for pod "kube-scheduler-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:31.881937   14037 pod_ready.go:38] duration metric: took 4.887481521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:30:31.881955   14037 api_server.go:52] waiting for apiserver process to appear ...
	I0725 17:30:31.882010   14037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
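The node and pod readiness checks above ("waiting up to 6m0s ... to be 'Ready'") and the apiserver process check share the same shape: poll a condition until it reports success or a deadline passes. A minimal generic sketch of that wait loop is shown below, with a placeholder condition standing in for the real Kubernetes API queries.

```go
// Generic poll-until-ready helper (a sketch, not minikube's node_ready/pod_ready code).
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor calls cond every interval until it returns true, returns an error,
// or timeout elapses.
func waitFor(timeout, interval time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitFor(6*time.Minute, 200*time.Millisecond, func() (bool, error) {
		// Placeholder condition: the real flow checks the "Ready" status
		// condition of the node or pod via the API server.
		return time.Since(start) > time.Second, nil
	})
	fmt.Println("wait result:", err)
}
```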
	I0725 17:30:33.678748   14037 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0725 17:30:33.678785   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:33.682101   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:33.682513   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:33.682539   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:33.682761   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:33.683112   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:33.683325   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:33.683500   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:33.849914   14037 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0725 17:30:33.899631   14037 addons.go:234] Setting addon gcp-auth=true in "addons-377932"
	I0725 17:30:33.899687   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:33.899995   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:33.900023   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:33.915048   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I0725 17:30:33.915478   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:33.915949   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:33.915967   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:33.916283   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:33.916955   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:33.917027   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:33.931543   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41639
	I0725 17:30:33.931997   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:33.932485   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:33.932508   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:33.932821   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:33.932978   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:33.934511   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:33.934736   14037 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0725 17:30:33.934758   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:33.937508   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:33.937905   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:33.937930   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:33.938071   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:33.938222   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:33.938363   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:33.938550   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:34.798399   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.580329833s)
	I0725 17:30:34.798450   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.798454   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.517552437s)
	I0725 17:30:34.798494   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.798516   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.798463   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.798499   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.499101707s)
	I0725 17:30:34.798938   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.798954   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.798977   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.20503125s)
	I0725 17:30:34.798884   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.230390546s)
	I0725 17:30:34.799006   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.799015   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.799019   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.799038   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.799528   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.920445673s)
	I0725 17:30:34.799565   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.799581   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.799916   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.831938722s)
	I0725 17:30:34.799939   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.799955   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.800133   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.642087812s)
	W0725 17:30:34.800166   14037 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0725 17:30:34.800186   14037 retry.go:31] will retry after 317.586915ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
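The failure being retried here is an ordering problem: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass, but the volumesnapshotclasses CRD created in the same apply is not yet established, so kubectl cannot map the kind and exits with status 1; re-running the apply once the CRDs are registered succeeds, which is why a ~318ms retry is scheduled. Below is an illustrative apply-and-retry sketch using os/exec; the file list is taken from the log, while the attempt count and delay are assumptions, and the real command also sets KUBECONFIG and runs under sudo.

```go
// Illustrative: re-run `kubectl apply` after a short delay when a CR is
// rejected because its CRD is not yet established.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(files []string, attempts int, delay time.Duration) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var out []byte
	var err error
	for i := 0; i < attempts; i++ {
		out, err = exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		fmt.Printf("apply failed (attempt %d): %v\n%s\n", i+1, err, out)
		time.Sleep(delay) // give the API server time to establish the new CRDs
	}
	return err
}

func main() {
	_ = applyWithRetry([]string{
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
	}, 3, 350*time.Millisecond)
}
```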
	I0725 17:30:34.800290   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.864940169s)
	I0725 17:30:34.800305   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.800342   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801588   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.801635   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801649   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801664   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801664   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801675   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.801685   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801692   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801699   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.801705   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801740   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801751   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801759   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801766   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.801773   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801819   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.801839   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801846   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801853   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.801865   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801896   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.801904   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801955   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.801985   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801993   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.802024   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.802033   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.802303   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.802343   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.802357   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801685   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803047   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803066   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803102   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.803109   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.803118   14037 addons.go:475] Verifying addon ingress=true in "addons-377932"
	I0725 17:30:34.803154   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803175   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.803250   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.803260   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.803268   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.803275   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.803190   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.803177   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803204   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.803749   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.803764   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.803773   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.803796   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803829   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.803835   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.803987   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.804032   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.804041   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.804052   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.804065   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.804383   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.804755   14037 out.go:177] * Verifying ingress addon...
	I0725 17:30:34.805513   14037 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-377932 service yakd-dashboard -n yakd-dashboard
	
	I0725 17:30:34.806791   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.806835   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.806851   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.806859   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.806859   14037 addons.go:475] Verifying addon registry=true in "addons-377932"
	I0725 17:30:34.806902   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.806910   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.806917   14037 addons.go:475] Verifying addon metrics-server=true in "addons-377932"
	I0725 17:30:34.807740   14037 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0725 17:30:34.809187   14037 out.go:177] * Verifying registry addon...
	I0725 17:30:34.811567   14037 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0725 17:30:34.833370   14037 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0725 17:30:34.833399   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:34.839153   14037 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0725 17:30:34.839173   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:35.118558   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0725 17:30:35.348369   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:35.348538   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:35.695836   14037 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.813802254s)
	I0725 17:30:35.695872   14037 api_server.go:72] duration metric: took 9.186876551s to wait for apiserver process to appear ...
	I0725 17:30:35.695881   14037 api_server.go:88] waiting for apiserver healthz status ...
	I0725 17:30:35.695896   14037 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.761146615s)
	I0725 17:30:35.695902   14037 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0725 17:30:35.695839   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.558218438s)
	I0725 17:30:35.696010   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:35.696033   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:35.696367   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:35.696464   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:35.696547   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:35.696561   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:35.696584   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:35.696798   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:35.696816   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:35.696828   14037 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-377932"
	I0725 17:30:35.697694   14037 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0725 17:30:35.698681   14037 out.go:177] * Verifying csi-hostpath-driver addon...
	I0725 17:30:35.700171   14037 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0725 17:30:35.700856   14037 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0725 17:30:35.701234   14037 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0725 17:30:35.701285   14037 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0725 17:30:35.708636   14037 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0725 17:30:35.709948   14037 api_server.go:141] control plane version: v1.30.3
	I0725 17:30:35.709967   14037 api_server.go:131] duration metric: took 14.080783ms to wait for apiserver health ...
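The apiserver health gate above is a plain GET on /healthz of https://192.168.39.150:8443; the "ok" body with HTTP 200 is what marks the control plane healthy. An equivalent manual probe, assuming the addons-377932 kubeconfig context produced by this run, is:

	kubectl --context addons-377932 get --raw /healthz
	kubectl --context addons-377932 get --raw '/healthz?verbose'

The first prints "ok"; the verbose form lists each individual health check.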
	I0725 17:30:35.709976   14037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:30:35.763090   14037 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0725 17:30:35.763114   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:35.766982   14037 system_pods.go:59] 19 kube-system pods found
	I0725 17:30:35.767015   14037 system_pods.go:61] "coredns-7db6d8ff4d-88xvs" [7b1bde6a-0813-443b-9380-b00b7d28e60b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:30:35.767022   14037 system_pods.go:61] "coredns-7db6d8ff4d-d9w47" [bdce9c77-c60e-470b-bcf9-92bc0457b00c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:30:35.767032   14037 system_pods.go:61] "csi-hostpath-attacher-0" [1dc5f394-e7fe-42cc-837c-dcc2bc950f3c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0725 17:30:35.767036   14037 system_pods.go:61] "csi-hostpath-resizer-0" [5690ce6b-1620-4e7b-a4c2-ba55aa2719d5] Pending
	I0725 17:30:35.767045   14037 system_pods.go:61] "csi-hostpathplugin-sp25x" [fc9e8e5b-9eea-48b0-ab93-a41dd47ba51b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0725 17:30:35.767049   14037 system_pods.go:61] "etcd-addons-377932" [cb332b46-cc93-4dac-b792-7af6ecb19e19] Running
	I0725 17:30:35.767055   14037 system_pods.go:61] "kube-apiserver-addons-377932" [a89d3695-faba-4fd1-8d6e-44636c441dd3] Running
	I0725 17:30:35.767058   14037 system_pods.go:61] "kube-controller-manager-addons-377932" [25b60c94-0c25-420b-bab2-85da901959c6] Running
	I0725 17:30:35.767063   14037 system_pods.go:61] "kube-ingress-dns-minikube" [edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0725 17:30:35.767067   14037 system_pods.go:61] "kube-proxy-lvfsq" [064711fa-5c88-45bd-9b18-e748ebeae659] Running
	I0725 17:30:35.767070   14037 system_pods.go:61] "kube-scheduler-addons-377932" [791f79f6-b25a-46df-8b0e-ac3a1aeeb699] Running
	I0725 17:30:35.767075   14037 system_pods.go:61] "metrics-server-c59844bb4-nn7lw" [4b69ce7d-1c27-46dc-8f29-5bab086365eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:30:35.767082   14037 system_pods.go:61] "nvidia-device-plugin-daemonset-g4wdw" [33f0f28c-f9cb-4e40-8b85-364dac249c2b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0725 17:30:35.767098   14037 system_pods.go:61] "registry-656c9c8d9c-rkw7r" [c0a7b843-4a5e-4647-b7cb-7dd968ac91e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0725 17:30:35.767107   14037 system_pods.go:61] "registry-proxy-d8vdg" [83703257-9ba2-4749-b11e-965f7b8f4403] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0725 17:30:35.767114   14037 system_pods.go:61] "snapshot-controller-745499f584-4nzhc" [10ddb74f-e7a9-4a1a-a18c-a81520d43966] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 17:30:35.767121   14037 system_pods.go:61] "snapshot-controller-745499f584-vdmrk" [7268b907-7d32-4b96-a2fd-7866d0ef5bc3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 17:30:35.767124   14037 system_pods.go:61] "storage-provisioner" [9e60203d-a803-41b0-9d64-802cd79cf088] Running
	I0725 17:30:35.767129   14037 system_pods.go:61] "tiller-deploy-6677d64bcd-gzwvc" [404a7d43-869c-4137-b5a9-e4f4ce531f65] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0725 17:30:35.767136   14037 system_pods.go:74] duration metric: took 57.154189ms to wait for pod list to return data ...
	I0725 17:30:35.767146   14037 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:30:35.776100   14037 default_sa.go:45] found service account: "default"
	I0725 17:30:35.776129   14037 default_sa.go:55] duration metric: took 8.976645ms for default service account to be created ...
	I0725 17:30:35.776143   14037 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 17:30:35.793249   14037 system_pods.go:86] 19 kube-system pods found
	I0725 17:30:35.793276   14037 system_pods.go:89] "coredns-7db6d8ff4d-88xvs" [7b1bde6a-0813-443b-9380-b00b7d28e60b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:30:35.793285   14037 system_pods.go:89] "coredns-7db6d8ff4d-d9w47" [bdce9c77-c60e-470b-bcf9-92bc0457b00c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:30:35.793292   14037 system_pods.go:89] "csi-hostpath-attacher-0" [1dc5f394-e7fe-42cc-837c-dcc2bc950f3c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0725 17:30:35.793299   14037 system_pods.go:89] "csi-hostpath-resizer-0" [5690ce6b-1620-4e7b-a4c2-ba55aa2719d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0725 17:30:35.793305   14037 system_pods.go:89] "csi-hostpathplugin-sp25x" [fc9e8e5b-9eea-48b0-ab93-a41dd47ba51b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0725 17:30:35.793311   14037 system_pods.go:89] "etcd-addons-377932" [cb332b46-cc93-4dac-b792-7af6ecb19e19] Running
	I0725 17:30:35.793316   14037 system_pods.go:89] "kube-apiserver-addons-377932" [a89d3695-faba-4fd1-8d6e-44636c441dd3] Running
	I0725 17:30:35.793322   14037 system_pods.go:89] "kube-controller-manager-addons-377932" [25b60c94-0c25-420b-bab2-85da901959c6] Running
	I0725 17:30:35.793331   14037 system_pods.go:89] "kube-ingress-dns-minikube" [edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0725 17:30:35.793337   14037 system_pods.go:89] "kube-proxy-lvfsq" [064711fa-5c88-45bd-9b18-e748ebeae659] Running
	I0725 17:30:35.793344   14037 system_pods.go:89] "kube-scheduler-addons-377932" [791f79f6-b25a-46df-8b0e-ac3a1aeeb699] Running
	I0725 17:30:35.793353   14037 system_pods.go:89] "metrics-server-c59844bb4-nn7lw" [4b69ce7d-1c27-46dc-8f29-5bab086365eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:30:35.793366   14037 system_pods.go:89] "nvidia-device-plugin-daemonset-g4wdw" [33f0f28c-f9cb-4e40-8b85-364dac249c2b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0725 17:30:35.793372   14037 system_pods.go:89] "registry-656c9c8d9c-rkw7r" [c0a7b843-4a5e-4647-b7cb-7dd968ac91e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0725 17:30:35.793381   14037 system_pods.go:89] "registry-proxy-d8vdg" [83703257-9ba2-4749-b11e-965f7b8f4403] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0725 17:30:35.793415   14037 system_pods.go:89] "snapshot-controller-745499f584-4nzhc" [10ddb74f-e7a9-4a1a-a18c-a81520d43966] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 17:30:35.793430   14037 system_pods.go:89] "snapshot-controller-745499f584-vdmrk" [7268b907-7d32-4b96-a2fd-7866d0ef5bc3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 17:30:35.793436   14037 system_pods.go:89] "storage-provisioner" [9e60203d-a803-41b0-9d64-802cd79cf088] Running
	I0725 17:30:35.793447   14037 system_pods.go:89] "tiller-deploy-6677d64bcd-gzwvc" [404a7d43-869c-4137-b5a9-e4f4ce531f65] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0725 17:30:35.793458   14037 system_pods.go:126] duration metric: took 17.30932ms to wait for k8s-apps to be running ...
	I0725 17:30:35.793470   14037 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 17:30:35.793514   14037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
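The kubelet service gate runs systemctl over SSH and treats exit status 0 from is-active as "running". An equivalent check from inside the node (reachable via, for example, minikube -p addons-377932 ssh; the echo is only illustration) is:

	sudo systemctl is-active --quiet kubelet && echo kubelet is active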
	I0725 17:30:35.820677   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:35.822463   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:35.858719   14037 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0725 17:30:35.858746   14037 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0725 17:30:35.941419   14037 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0725 17:30:35.941448   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0725 17:30:35.996382   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0725 17:30:36.206578   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:36.312363   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:36.315590   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:36.707078   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:36.811961   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:36.815786   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:37.209856   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.091248009s)
	I0725 17:30:37.209912   14037 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.416371159s)
	I0725 17:30:37.209938   14037 system_svc.go:56] duration metric: took 1.416464158s WaitForService to wait for kubelet
	I0725 17:30:37.209952   14037 kubeadm.go:582] duration metric: took 10.700953135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:30:37.209977   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.213559275s)
	I0725 17:30:37.209917   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:37.210002   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:37.210004   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:37.209983   14037 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:30:37.210017   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:37.210375   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:37.210398   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:37.210424   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:37.210441   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:37.210454   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:37.210464   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:37.210660   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:37.210713   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:37.210713   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:37.211803   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:37.211823   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:37.211838   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:37.211846   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:37.212051   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:37.212066   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:37.212412   14037 addons.go:475] Verifying addon gcp-auth=true in "addons-377932"
	I0725 17:30:37.214663   14037 out.go:177] * Verifying gcp-auth addon...
	I0725 17:30:37.216795   14037 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0725 17:30:37.243009   14037 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0725 17:30:37.243029   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:37.243796   14037 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:30:37.243826   14037 node_conditions.go:123] node cpu capacity is 2
	I0725 17:30:37.243841   14037 node_conditions.go:105] duration metric: took 33.82153ms to run NodePressure ...
	I0725 17:30:37.243856   14037 start.go:241] waiting for startup goroutines ...
	I0725 17:30:37.244242   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:37.320625   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:37.342538   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:37.707972   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:37.719826   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:37.812265   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:37.815699   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:38.206819   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:38.219960   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:38.313801   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:38.324559   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:38.736980   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:38.737728   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:38.811890   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:38.816631   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:39.207086   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:39.219616   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:39.312416   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:39.316569   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:39.708711   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:39.720666   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:39.812262   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:39.816415   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:40.205904   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:40.220486   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:40.312168   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:40.316140   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:40.706266   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:40.720254   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:40.811762   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:40.815166   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:41.206539   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:41.220492   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:41.312312   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:41.316452   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:41.707006   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:41.720428   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:41.811945   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:41.815559   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:42.206882   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:42.220009   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:42.311963   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:42.318067   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:42.706796   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:42.720838   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:42.812600   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:42.815106   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:43.207783   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:43.220029   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:43.311718   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:43.315243   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:43.706349   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:43.720490   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:43.812170   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:43.815716   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:44.207314   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:44.221629   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:44.312638   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:44.315795   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:44.707061   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:44.721185   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:44.811926   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:44.815586   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:45.206588   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:45.220553   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:45.312976   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:45.315941   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:45.706420   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:45.720459   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:45.811803   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:45.816123   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:46.206796   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:46.220632   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:46.312549   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:46.315722   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:46.707514   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:46.720843   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:46.813065   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:46.817743   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:47.206065   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:47.220219   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:47.311527   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:47.315270   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:47.708354   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:47.721037   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:47.811470   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:47.815720   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:48.206839   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:48.219835   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:48.313038   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:48.315599   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:48.706824   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:48.720200   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:48.811668   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:48.815131   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:49.205702   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:49.219986   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:49.311568   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:49.315456   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:49.706646   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:49.719697   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:49.812997   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:49.816380   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:50.206727   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:50.219967   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:50.311711   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:50.315196   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:50.707603   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:50.720103   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:50.812073   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:50.816435   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:51.206745   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:51.220252   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:51.312257   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:51.315943   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:51.710097   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:51.727851   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:51.812675   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:51.817856   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:52.207044   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:52.219897   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:52.312665   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:52.317452   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:52.706608   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:52.720869   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:52.812127   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:52.815353   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:53.207219   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:53.219876   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:53.312543   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:53.315133   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:53.706611   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:53.721096   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:53.811616   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:53.814869   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:54.206306   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:54.221532   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:54.311651   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:54.314961   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:54.706383   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:54.720590   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:54.812048   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:54.815460   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:55.206206   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:55.220169   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:55.311659   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:55.315730   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:55.707166   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:55.720215   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:55.812090   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:55.816701   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:56.206236   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:56.220234   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:56.316133   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:56.324665   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:56.707709   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:56.721098   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:56.811890   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:56.815732   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:57.205963   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:57.219903   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:57.312966   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:57.316415   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:57.706074   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:57.720280   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:57.812123   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:57.815466   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:58.206259   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:58.220454   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:58.312352   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:58.315684   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:58.707036   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:58.720180   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:58.811537   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:58.815672   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:59.206731   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:59.220823   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:59.312598   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:59.315024   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:59.706048   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:59.720100   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:59.811853   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:59.816290   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:00.206109   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:00.219876   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:00.312234   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:00.315637   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:00.708465   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:00.720954   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:00.812277   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:00.821816   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:01.209706   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:01.219697   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:01.311767   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:01.315728   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:01.707729   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:01.719810   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:01.814221   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:01.817577   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:02.208802   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:02.219441   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:02.317514   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:02.317607   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:02.707118   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:02.720106   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:02.812355   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:02.817262   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:03.207339   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:03.220378   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:03.314117   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:03.316585   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:03.705898   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:03.719740   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:03.817167   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:03.817841   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:04.205666   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:04.219649   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:04.312153   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:04.316167   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:04.706210   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:04.720503   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:04.811872   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:04.815916   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:05.206469   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:05.220028   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:05.312364   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:05.315052   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:05.706560   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:05.720571   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:05.811945   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:05.815383   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:06.206601   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:06.219380   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:06.313343   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:06.333180   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:06.909655   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:06.910319   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:06.910653   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:06.910699   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:07.208814   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:07.222577   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:07.312511   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:07.316019   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:07.706837   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:07.719918   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:07.816013   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:07.821195   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:08.206209   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:08.220278   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:08.311978   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:08.315666   14037 kapi.go:107] duration metric: took 33.504098676s to wait for kubernetes.io/minikube-addons=registry ...
	I0725 17:31:08.707992   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:08.720442   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:08.812057   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:09.206578   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:09.220678   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:09.312579   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:09.706135   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:09.720250   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:09.811671   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:10.206222   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:10.220368   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:10.312309   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:10.706495   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:10.720792   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:10.812696   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:11.206559   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:11.221003   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:11.312468   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:11.706687   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:11.720985   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:11.812565   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:12.205946   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:12.220095   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:12.311570   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:12.706316   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:12.720385   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:12.811956   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:13.206263   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:13.220199   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:13.311857   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:13.707929   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:13.719947   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:13.812799   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:14.206613   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:14.220817   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:14.312399   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:14.705678   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:14.719868   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:14.812670   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:15.206299   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:15.220566   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:15.312340   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:15.707639   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:15.720862   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:15.812488   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:16.206648   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:16.220122   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:16.312402   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:16.706683   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:16.719650   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:16.812236   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:17.206438   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:17.221035   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:17.311614   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:17.711149   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:17.721716   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:17.812357   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:18.219299   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:18.223925   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:18.312382   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:18.705817   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:18.720019   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:18.814450   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:19.206185   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:19.220756   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:19.312907   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:19.706939   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:19.720233   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:19.812031   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:20.206821   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:20.221042   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:20.313405   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:20.706376   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:20.720742   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:20.812972   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:21.207005   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:21.219833   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:21.312298   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:21.706807   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:21.720757   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:21.813040   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:22.206634   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:22.220276   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:22.316518   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:22.708504   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:22.720463   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:22.813557   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:23.206692   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:23.221240   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:23.312487   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:23.707352   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:23.724066   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:23.812221   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:24.207062   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:24.220733   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:24.555951   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:24.707216   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:24.720292   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:24.812178   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:25.206124   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:25.220463   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:25.311928   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:25.706572   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:25.720700   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:25.812549   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:26.206621   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:26.219445   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:26.312490   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:26.708892   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:26.720011   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:26.813072   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:27.211617   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:27.220766   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:27.312285   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:27.707498   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:27.722611   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:27.812315   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:28.209273   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:28.220999   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:28.311550   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:28.706646   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:28.720682   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:28.813651   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:29.210902   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:29.222191   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:29.312166   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:29.709202   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:29.720947   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:29.812735   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:30.206678   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:30.221433   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:30.313304   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:30.707637   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:30.723017   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:30.812213   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:31.211245   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:31.220189   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:31.311466   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:31.706704   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:31.719947   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:31.812571   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:32.206963   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:32.220110   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:32.703982   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:32.713008   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:32.725735   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:32.815590   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:33.206137   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:33.219984   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:33.311468   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:33.706835   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:33.719825   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:33.812236   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:34.205763   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:34.220144   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:34.311835   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:34.706688   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:34.720728   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:34.812211   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:35.206024   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:35.220754   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:35.312504   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:35.705962   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:35.719996   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:35.811808   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:36.208186   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:36.219634   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:36.312667   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:36.716138   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:36.721785   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:36.816769   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:37.209998   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:37.222740   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:37.313629   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:37.706281   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:37.720781   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:37.812025   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:38.206812   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:38.220713   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:38.313175   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:38.706925   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:38.720092   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:38.811795   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:39.211283   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:39.226714   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:39.312552   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:39.710203   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:39.719943   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:39.812751   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:40.206005   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:40.220610   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:40.312390   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:40.808487   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:40.813797   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:40.814468   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:41.206090   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:41.221008   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:41.311831   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:41.706728   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:41.720669   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:41.813801   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:42.206187   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:42.220718   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:42.320241   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:42.707527   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:42.721830   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:42.812750   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:43.205885   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:43.220056   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:43.313715   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:43.705756   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:43.719741   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:43.812189   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:44.206668   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:44.219812   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:44.312611   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:44.706400   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:44.720987   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:44.812704   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:45.434667   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:45.435264   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:45.438921   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:45.706255   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:45.720549   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:45.812336   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:46.207327   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:46.219901   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:46.312842   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:46.705809   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:46.720260   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:46.811896   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:47.207318   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:47.220408   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:47.312497   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:47.707177   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:47.720889   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:47.813116   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:48.206728   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:48.221159   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:48.311934   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:48.712716   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:48.720448   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:48.813339   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:49.207279   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:49.220869   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:49.312715   14037 kapi.go:107] duration metric: took 1m14.504972311s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0725 17:31:49.706894   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:49.720091   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:50.207511   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:50.224157   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:50.705899   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:50.720070   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:51.207404   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:51.220681   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:51.708289   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:51.722737   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:52.206005   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:52.221183   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:52.706759   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:52.720169   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:53.206356   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:53.226919   14037 kapi.go:107] duration metric: took 1m16.010122961s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0725 17:31:53.228898   14037 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-377932 cluster.
	I0725 17:31:53.230496   14037 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0725 17:31:53.231987   14037 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
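	(The `gcp-auth-skip-secret` opt-out mentioned in the messages above is applied as a pod label. Below is a minimal sketch of a pod manifest that opts out of credential mounting; only the label key comes from the log above — the label value "true", the pod name, and the image are assumptions used for illustration, not taken from this report.)

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # placeholder name, not from this report
	  labels:
	    gcp-auth-skip-secret: "true"     # label key per the log above; value "true" is an assumption
	spec:
	  containers:
	  - name: app
	    image: busybox                   # placeholder image
	    command: ["sleep", "3600"]

	(Per the note above, pods created without this label get the GCP credentials mounted automatically; pods that already existed need to be recreated, or the addon re-enabled with --refresh, for the mount to apply.)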
	I0725 17:31:53.714096   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:54.206152   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:54.707947   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:55.206743   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:55.707190   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:56.207973   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:56.706530   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:57.206751   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:57.706346   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:58.205996   14037 kapi.go:107] duration metric: took 1m22.505136514s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0725 17:31:58.207793   14037 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, ingress-dns, helm-tiller, cloud-spanner, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0725 17:31:58.209159   14037 addons.go:510] duration metric: took 1m31.70011609s for enable addons: enabled=[nvidia-device-plugin default-storageclass storage-provisioner storage-provisioner-rancher inspektor-gadget ingress-dns helm-tiller cloud-spanner yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0725 17:31:58.209208   14037 start.go:246] waiting for cluster config update ...
	I0725 17:31:58.209230   14037 start.go:255] writing updated cluster config ...
	I0725 17:31:58.209488   14037 ssh_runner.go:195] Run: rm -f paused
	I0725 17:31:58.260030   14037 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 17:31:58.261629   14037 out.go:177] * Done! kubectl is now configured to use "addons-377932" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.251879811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721928911251854729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7f7adea-50c2-489c-a36a-1590b3269de0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.252437549Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11052540-b6a9-4409-8f10-0e5596f865ad name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.252507945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11052540-b6a9-4409-8f10-0e5596f865ad name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.252820075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:605d9f89738857f73ecfcfeb9c6420117e0974b29ff89f26ea1df493a8406e56,PodSandboxId:406b53b6aea6b145e2e770c1256671904620656b6b6415a0a82d33f7fdf5d9a5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721928903760302707,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-8zkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7916284-6975-4022-aa91-4a43f1c6e583,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed8c40e,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c341f933ddbbc35053577b7ae12cf7787f214585a4dfa82be15d65ceb2b234,PodSandboxId:3f41aee5c6cd1526f311e2d8d93813b09d3b4969473529714c0a5f9bddaed408,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721928764983862542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749c6e52-1618-4a00-9ab0-3f73733eccb3,},Annotations:map[string]string{io.kubernet
es.container.hash: 5ba99726,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae50a4c55eb97c32079964aafe7bf01448d4d3c6e93e58d0b85618d92092309b,PodSandboxId:6a583c13e6c81b0e73f552197c300c8457030474923f0c6a7ba11ddb65d0ea2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721928721948786992,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d885e2b9-afbe-457a-8
610-eb1d724c9dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 69ffc994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b81445e13cad64a3211c0ad0ce4b0e6d6e232e247547b68dc9ec46c436aa5b2,PodSandboxId:df2ba68cae7f63a4f3bf0dcfdccafc884d7689c165254c4848233252f54427bd,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721928696650597289,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g55cc,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: fea6cd12-8bbe-4b4c-9019-69e94b6305cc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f246abd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b550d33b6fccc0b31cc4b41a3806ec0bdb3bcf329c0dd7b236eefee161007d4,PodSandboxId:810ef9f9940b29dfa034e50ce05772d34f39d7731c08ad970031a628c4288321,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721928694967620545,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqjsl,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 57501751-2e35-47e2-8ecf-76ac61e45cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 27b33b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e028832ba2ea4cbd56864b959e53d89eb6f2ed40c67f3fd9af5f2f9904ab35,PodSandboxId:01bcc628f927c4229ea5f5b5d5f0b6099e2a844a8eed7c7e104975bddc6377a2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721928673046539755,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-nn7lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b69ce7d-1c27-46dc-8f29-5bab086365eb,},Annotations:map[string]string{io.kubernetes.container.hash: cf9e61a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60fb2bb6a1c7427e330a768b5c62d791b070a56ee96c61acc8e04670a5a8b14,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721928663778707543,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4e20ecc3a7a1e5a06a26761db7253d5e088592844861951898a1f16f690cfe,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721928632863886449,Labels:map[stri
ng]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bcedefced0621e7b9b7519fa1d73e3cebec052e20203ce99ec29feb67df47b,PodSandboxId:b08ab3dbd8f75f07c54ae334c1660b9be440eef7c503a30efd23842910f4392e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721928631517353372,Labels:map[string]string{io.kubernetes.
container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d9w47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce9c77-c60e-470b-bcf9-92bc0457b00c,},Annotations:map[string]string{io.kubernetes.container.hash: ffa22845,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383275aa3c4dc1009fe684f9fa0797717d9dbef15416e9af9ff583db4cd4ed61,PodSandboxId:dcf515c23ca90066fffcbfd53fc84ca925b3643a560df9b1259b9288f2513d45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721928629712219890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvfsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064711fa-5c88-45bd-9b18-e748ebeae659,},Annotations:map[string]string{io.kubernetes.container.hash: 9d0508d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d187294f4f9fa49570c5c4ae46c651198cf47b3c30b7c01b542af281cf6b3f,PodSandboxId:afb1a6e40a036738b703891508a305871507865c5f7acc14eb4be26efecc24e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721928607258165763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78bf4fc3df2079dd9bd7daba69db0ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dbfcd8215ac5cf2a3ae2688eab72ea0ad3e16b46ba80fb0124143a8b0562fc,PodSandboxId:9e2d5db7d289b6a0acb9d72594e5efe1b35f36c3f808754bcc9986477713abe3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721928607274483031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c33acae49de20875adafe8930587501,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74106d9dcdfc7a2f06c3c370eb476841959e525b1b30fbfd2d47762bb38e671d,PodSandboxId:a0c0bd8e8111e087275c8e71eebc5b610c5a03dc5f85e1af2abf2ea40465f359,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cf
cd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721928607281682391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f7a70e585999e6e7728526daf481b0,},Annotations:map[string]string{io.kubernetes.container.hash: f85058f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe8d24934c776db5c242da943566ed443ef9a8453d1b9eb2574a622d2318cfa,PodSandboxId:c5b9fae7f4ee509cd9e027faa73d8bdbb7c00d260c94d229d68ae6775155c887,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071d
d8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721928607253278403,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f45d056b0a4be696cc4a8496bff9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 308547b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11052540-b6a9-4409-8f10-0e5596f865ad name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.295227318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bf3d2f5-b896-475e-8141-7a9b8ed8327d name=/runtime.v1.RuntimeService/Version
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.295311229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bf3d2f5-b896-475e-8141-7a9b8ed8327d name=/runtime.v1.RuntimeService/Version
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.297800458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8efa2be0-a764-41f0-b98a-d8ca42a82b26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.299101610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721928911299061229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8efa2be0-a764-41f0-b98a-d8ca42a82b26 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.299634439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6981944-aaf1-44fc-bef9-8fe38f352b8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.299703450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6981944-aaf1-44fc-bef9-8fe38f352b8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.300050422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:605d9f89738857f73ecfcfeb9c6420117e0974b29ff89f26ea1df493a8406e56,PodSandboxId:406b53b6aea6b145e2e770c1256671904620656b6b6415a0a82d33f7fdf5d9a5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721928903760302707,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-8zkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7916284-6975-4022-aa91-4a43f1c6e583,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed8c40e,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c341f933ddbbc35053577b7ae12cf7787f214585a4dfa82be15d65ceb2b234,PodSandboxId:3f41aee5c6cd1526f311e2d8d93813b09d3b4969473529714c0a5f9bddaed408,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721928764983862542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749c6e52-1618-4a00-9ab0-3f73733eccb3,},Annotations:map[string]string{io.kubernet
es.container.hash: 5ba99726,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae50a4c55eb97c32079964aafe7bf01448d4d3c6e93e58d0b85618d92092309b,PodSandboxId:6a583c13e6c81b0e73f552197c300c8457030474923f0c6a7ba11ddb65d0ea2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721928721948786992,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d885e2b9-afbe-457a-8
610-eb1d724c9dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 69ffc994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b81445e13cad64a3211c0ad0ce4b0e6d6e232e247547b68dc9ec46c436aa5b2,PodSandboxId:df2ba68cae7f63a4f3bf0dcfdccafc884d7689c165254c4848233252f54427bd,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721928696650597289,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g55cc,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: fea6cd12-8bbe-4b4c-9019-69e94b6305cc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f246abd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b550d33b6fccc0b31cc4b41a3806ec0bdb3bcf329c0dd7b236eefee161007d4,PodSandboxId:810ef9f9940b29dfa034e50ce05772d34f39d7731c08ad970031a628c4288321,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721928694967620545,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqjsl,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 57501751-2e35-47e2-8ecf-76ac61e45cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 27b33b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e028832ba2ea4cbd56864b959e53d89eb6f2ed40c67f3fd9af5f2f9904ab35,PodSandboxId:01bcc628f927c4229ea5f5b5d5f0b6099e2a844a8eed7c7e104975bddc6377a2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721928673046539755,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-nn7lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b69ce7d-1c27-46dc-8f29-5bab086365eb,},Annotations:map[string]string{io.kubernetes.container.hash: cf9e61a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60fb2bb6a1c7427e330a768b5c62d791b070a56ee96c61acc8e04670a5a8b14,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721928663778707543,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4e20ecc3a7a1e5a06a26761db7253d5e088592844861951898a1f16f690cfe,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721928632863886449,Labels:map[stri
ng]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bcedefced0621e7b9b7519fa1d73e3cebec052e20203ce99ec29feb67df47b,PodSandboxId:b08ab3dbd8f75f07c54ae334c1660b9be440eef7c503a30efd23842910f4392e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721928631517353372,Labels:map[string]string{io.kubernetes.
container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d9w47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce9c77-c60e-470b-bcf9-92bc0457b00c,},Annotations:map[string]string{io.kubernetes.container.hash: ffa22845,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383275aa3c4dc1009fe684f9fa0797717d9dbef15416e9af9ff583db4cd4ed61,PodSandboxId:dcf515c23ca90066fffcbfd53fc84ca925b3643a560df9b1259b9288f2513d45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721928629712219890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvfsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064711fa-5c88-45bd-9b18-e748ebeae659,},Annotations:map[string]string{io.kubernetes.container.hash: 9d0508d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d187294f4f9fa49570c5c4ae46c651198cf47b3c30b7c01b542af281cf6b3f,PodSandboxId:afb1a6e40a036738b703891508a305871507865c5f7acc14eb4be26efecc24e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721928607258165763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78bf4fc3df2079dd9bd7daba69db0ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dbfcd8215ac5cf2a3ae2688eab72ea0ad3e16b46ba80fb0124143a8b0562fc,PodSandboxId:9e2d5db7d289b6a0acb9d72594e5efe1b35f36c3f808754bcc9986477713abe3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721928607274483031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c33acae49de20875adafe8930587501,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74106d9dcdfc7a2f06c3c370eb476841959e525b1b30fbfd2d47762bb38e671d,PodSandboxId:a0c0bd8e8111e087275c8e71eebc5b610c5a03dc5f85e1af2abf2ea40465f359,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cf
cd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721928607281682391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f7a70e585999e6e7728526daf481b0,},Annotations:map[string]string{io.kubernetes.container.hash: f85058f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe8d24934c776db5c242da943566ed443ef9a8453d1b9eb2574a622d2318cfa,PodSandboxId:c5b9fae7f4ee509cd9e027faa73d8bdbb7c00d260c94d229d68ae6775155c887,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071d
d8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721928607253278403,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f45d056b0a4be696cc4a8496bff9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 308547b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6981944-aaf1-44fc-bef9-8fe38f352b8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.335059655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=716312e5-e8ff-460c-a4e2-343b09654c5d name=/runtime.v1.RuntimeService/Version
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.335145060Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=716312e5-e8ff-460c-a4e2-343b09654c5d name=/runtime.v1.RuntimeService/Version
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.336275713Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddd3f1ce-0e3f-4cbc-ab79-0507a706c9c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.337480244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721928911337452599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddd3f1ce-0e3f-4cbc-ab79-0507a706c9c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.338144605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6de0f60f-3255-4076-92c3-b34202c977cc name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.338212342Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6de0f60f-3255-4076-92c3-b34202c977cc name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.338527490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:605d9f89738857f73ecfcfeb9c6420117e0974b29ff89f26ea1df493a8406e56,PodSandboxId:406b53b6aea6b145e2e770c1256671904620656b6b6415a0a82d33f7fdf5d9a5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721928903760302707,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-8zkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7916284-6975-4022-aa91-4a43f1c6e583,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed8c40e,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c341f933ddbbc35053577b7ae12cf7787f214585a4dfa82be15d65ceb2b234,PodSandboxId:3f41aee5c6cd1526f311e2d8d93813b09d3b4969473529714c0a5f9bddaed408,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721928764983862542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749c6e52-1618-4a00-9ab0-3f73733eccb3,},Annotations:map[string]string{io.kubernet
es.container.hash: 5ba99726,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae50a4c55eb97c32079964aafe7bf01448d4d3c6e93e58d0b85618d92092309b,PodSandboxId:6a583c13e6c81b0e73f552197c300c8457030474923f0c6a7ba11ddb65d0ea2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721928721948786992,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d885e2b9-afbe-457a-8
610-eb1d724c9dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 69ffc994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b81445e13cad64a3211c0ad0ce4b0e6d6e232e247547b68dc9ec46c436aa5b2,PodSandboxId:df2ba68cae7f63a4f3bf0dcfdccafc884d7689c165254c4848233252f54427bd,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721928696650597289,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g55cc,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: fea6cd12-8bbe-4b4c-9019-69e94b6305cc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f246abd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b550d33b6fccc0b31cc4b41a3806ec0bdb3bcf329c0dd7b236eefee161007d4,PodSandboxId:810ef9f9940b29dfa034e50ce05772d34f39d7731c08ad970031a628c4288321,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721928694967620545,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqjsl,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 57501751-2e35-47e2-8ecf-76ac61e45cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 27b33b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e028832ba2ea4cbd56864b959e53d89eb6f2ed40c67f3fd9af5f2f9904ab35,PodSandboxId:01bcc628f927c4229ea5f5b5d5f0b6099e2a844a8eed7c7e104975bddc6377a2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721928673046539755,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-nn7lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b69ce7d-1c27-46dc-8f29-5bab086365eb,},Annotations:map[string]string{io.kubernetes.container.hash: cf9e61a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60fb2bb6a1c7427e330a768b5c62d791b070a56ee96c61acc8e04670a5a8b14,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721928663778707543,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4e20ecc3a7a1e5a06a26761db7253d5e088592844861951898a1f16f690cfe,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721928632863886449,Labels:map[stri
ng]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bcedefced0621e7b9b7519fa1d73e3cebec052e20203ce99ec29feb67df47b,PodSandboxId:b08ab3dbd8f75f07c54ae334c1660b9be440eef7c503a30efd23842910f4392e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721928631517353372,Labels:map[string]string{io.kubernetes.
container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d9w47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce9c77-c60e-470b-bcf9-92bc0457b00c,},Annotations:map[string]string{io.kubernetes.container.hash: ffa22845,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383275aa3c4dc1009fe684f9fa0797717d9dbef15416e9af9ff583db4cd4ed61,PodSandboxId:dcf515c23ca90066fffcbfd53fc84ca925b3643a560df9b1259b9288f2513d45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721928629712219890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvfsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064711fa-5c88-45bd-9b18-e748ebeae659,},Annotations:map[string]string{io.kubernetes.container.hash: 9d0508d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d187294f4f9fa49570c5c4ae46c651198cf47b3c30b7c01b542af281cf6b3f,PodSandboxId:afb1a6e40a036738b703891508a305871507865c5f7acc14eb4be26efecc24e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721928607258165763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78bf4fc3df2079dd9bd7daba69db0ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dbfcd8215ac5cf2a3ae2688eab72ea0ad3e16b46ba80fb0124143a8b0562fc,PodSandboxId:9e2d5db7d289b6a0acb9d72594e5efe1b35f36c3f808754bcc9986477713abe3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721928607274483031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c33acae49de20875adafe8930587501,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74106d9dcdfc7a2f06c3c370eb476841959e525b1b30fbfd2d47762bb38e671d,PodSandboxId:a0c0bd8e8111e087275c8e71eebc5b610c5a03dc5f85e1af2abf2ea40465f359,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cf
cd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721928607281682391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f7a70e585999e6e7728526daf481b0,},Annotations:map[string]string{io.kubernetes.container.hash: f85058f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe8d24934c776db5c242da943566ed443ef9a8453d1b9eb2574a622d2318cfa,PodSandboxId:c5b9fae7f4ee509cd9e027faa73d8bdbb7c00d260c94d229d68ae6775155c887,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071d
d8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721928607253278403,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f45d056b0a4be696cc4a8496bff9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 308547b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6de0f60f-3255-4076-92c3-b34202c977cc name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.371051595Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c205e6a0-821f-4c3e-bd51-e62ec91c8446 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.371133434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c205e6a0-821f-4c3e-bd51-e62ec91c8446 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.372467665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a37944f0-686b-441b-ad7c-fdb8340acd82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.374057244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721928911374029876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a37944f0-686b-441b-ad7c-fdb8340acd82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.374515507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecf535b3-6f12-4d96-9ec1-b70ff1284636 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.374579413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecf535b3-6f12-4d96-9ec1-b70ff1284636 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:35:11 addons-377932 crio[680]: time="2024-07-25 17:35:11.374876813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:605d9f89738857f73ecfcfeb9c6420117e0974b29ff89f26ea1df493a8406e56,PodSandboxId:406b53b6aea6b145e2e770c1256671904620656b6b6415a0a82d33f7fdf5d9a5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721928903760302707,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-8zkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7916284-6975-4022-aa91-4a43f1c6e583,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed8c40e,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c341f933ddbbc35053577b7ae12cf7787f214585a4dfa82be15d65ceb2b234,PodSandboxId:3f41aee5c6cd1526f311e2d8d93813b09d3b4969473529714c0a5f9bddaed408,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721928764983862542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749c6e52-1618-4a00-9ab0-3f73733eccb3,},Annotations:map[string]string{io.kubernet
es.container.hash: 5ba99726,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae50a4c55eb97c32079964aafe7bf01448d4d3c6e93e58d0b85618d92092309b,PodSandboxId:6a583c13e6c81b0e73f552197c300c8457030474923f0c6a7ba11ddb65d0ea2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721928721948786992,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d885e2b9-afbe-457a-8
610-eb1d724c9dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 69ffc994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b81445e13cad64a3211c0ad0ce4b0e6d6e232e247547b68dc9ec46c436aa5b2,PodSandboxId:df2ba68cae7f63a4f3bf0dcfdccafc884d7689c165254c4848233252f54427bd,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721928696650597289,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-g55cc,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: fea6cd12-8bbe-4b4c-9019-69e94b6305cc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f246abd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b550d33b6fccc0b31cc4b41a3806ec0bdb3bcf329c0dd7b236eefee161007d4,PodSandboxId:810ef9f9940b29dfa034e50ce05772d34f39d7731c08ad970031a628c4288321,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721928694967620545,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqjsl,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 57501751-2e35-47e2-8ecf-76ac61e45cc9,},Annotations:map[string]string{io.kubernetes.container.hash: 27b33b26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e028832ba2ea4cbd56864b959e53d89eb6f2ed40c67f3fd9af5f2f9904ab35,PodSandboxId:01bcc628f927c4229ea5f5b5d5f0b6099e2a844a8eed7c7e104975bddc6377a2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721928673046539755,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-nn7lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b69ce7d-1c27-46dc-8f29-5bab086365eb,},Annotations:map[string]string{io.kubernetes.container.hash: cf9e61a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60fb2bb6a1c7427e330a768b5c62d791b070a56ee96c61acc8e04670a5a8b14,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721928663778707543,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4e20ecc3a7a1e5a06a26761db7253d5e088592844861951898a1f16f690cfe,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721928632863886449,Labels:map[stri
ng]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bcedefced0621e7b9b7519fa1d73e3cebec052e20203ce99ec29feb67df47b,PodSandboxId:b08ab3dbd8f75f07c54ae334c1660b9be440eef7c503a30efd23842910f4392e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721928631517353372,Labels:map[string]string{io.kubernetes.
container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d9w47,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce9c77-c60e-470b-bcf9-92bc0457b00c,},Annotations:map[string]string{io.kubernetes.container.hash: ffa22845,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383275aa3c4dc1009fe684f9fa0797717d9dbef15416e9af9ff583db4cd4ed61,PodSandboxId:dcf515c23ca90066fffcbfd53fc84ca925b3643a560df9b1259b9288f2513d45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721928629712219890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvfsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064711fa-5c88-45bd-9b18-e748ebeae659,},Annotations:map[string]string{io.kubernetes.container.hash: 9d0508d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d187294f4f9fa49570c5c4ae46c651198cf47b3c30b7c01b542af281cf6b3f,PodSandboxId:afb1a6e40a036738b703891508a305871507865c5f7acc14eb4be26efecc24e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721928607258165763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78bf4fc3df2079dd9bd7daba69db0ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dbfcd8215ac5cf2a3ae2688eab72ea0ad3e16b46ba80fb0124143a8b0562fc,PodSandboxId:9e2d5db7d289b6a0acb9d72594e5efe1b35f36c3f808754bcc9986477713abe3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageR
ef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721928607274483031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c33acae49de20875adafe8930587501,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74106d9dcdfc7a2f06c3c370eb476841959e525b1b30fbfd2d47762bb38e671d,PodSandboxId:a0c0bd8e8111e087275c8e71eebc5b610c5a03dc5f85e1af2abf2ea40465f359,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cf
cd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721928607281682391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f7a70e585999e6e7728526daf481b0,},Annotations:map[string]string{io.kubernetes.container.hash: f85058f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe8d24934c776db5c242da943566ed443ef9a8453d1b9eb2574a622d2318cfa,PodSandboxId:c5b9fae7f4ee509cd9e027faa73d8bdbb7c00d260c94d229d68ae6775155c887,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071d
d8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721928607253278403,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f45d056b0a4be696cc4a8496bff9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 308547b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ecf535b3-6f12-4d96-9ec1-b70ff1284636 name=/runtime.v1.RuntimeService/ListContainers
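	The CRI-O debug entries above are the runtime answering the kubelet's periodic Version, ImageFsInfo and ListContainers polls; the container list repeats verbatim because nothing changed between polls. A minimal way to reproduce the same queries by hand (a sketch, assuming crictl is installed in the minikube guest and configured for the CRI-O socket at unix:///var/run/crio/crio.sock, which is the default for this driver):
	
	  out/minikube-linux-amd64 -p addons-377932 ssh "sudo crictl version"
	  out/minikube-linux-amd64 -p addons-377932 ssh "sudo crictl imagefsinfo"
	  out/minikube-linux-amd64 -p addons-377932 ssh "sudo crictl ps -a"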
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	605d9f8973885       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   406b53b6aea6b       hello-world-app-6778b5fc9f-8zkzg
	54c341f933ddb       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   3f41aee5c6cd1       nginx
	ae50a4c55eb97       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   6a583c13e6c81       busybox
	6b81445e13cad       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              patch                     0                   df2ba68cae7f6       ingress-nginx-admission-patch-g55cc
	6b550d33b6fcc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   810ef9f9940b2       ingress-nginx-admission-create-dqjsl
	96e028832ba2e       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   01bcc628f927c       metrics-server-c59844bb4-nn7lw
	b60fb2bb6a1c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       1                   d646775153f6b       storage-provisioner
	cf4e20ecc3a7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Exited              storage-provisioner       0                   d646775153f6b       storage-provisioner
	f3bcedefced06       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   b08ab3dbd8f75       coredns-7db6d8ff4d-d9w47
	383275aa3c4dc       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             4 minutes ago       Running             kube-proxy                0                   dcf515c23ca90       kube-proxy-lvfsq
	74106d9dcdfc7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   a0c0bd8e8111e       etcd-addons-377932
	a6dbfcd8215ac       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             5 minutes ago       Running             kube-controller-manager   0                   9e2d5db7d289b       kube-controller-manager-addons-377932
	57d187294f4f9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             5 minutes ago       Running             kube-scheduler            0                   afb1a6e40a036       kube-scheduler-addons-377932
	cbe8d24934c77       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             5 minutes ago       Running             kube-apiserver            0                   c5b9fae7f4ee5       kube-apiserver-addons-377932
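	The container IDs in this table can be fed back to the runtime for inspection; for example, to read the hello-world-app container's logs directly from CRI-O (a sketch, using the truncated ID shown above; crictl accepts any unique ID prefix):
	
	  out/minikube-linux-amd64 -p addons-377932 ssh "sudo crictl logs 605d9f8973885"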
	
	
	==> coredns [f3bcedefced0621e7b9b7519fa1d73e3cebec052e20203ce99ec29feb67df47b] <==
	[INFO] 10.244.0.6:55901 - 27103 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000167601s
	[INFO] 10.244.0.6:55554 - 46043 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112123s
	[INFO] 10.244.0.6:55554 - 18117 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000214134s
	[INFO] 10.244.0.6:55789 - 36728 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000176063s
	[INFO] 10.244.0.6:55789 - 25975 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000215718s
	[INFO] 10.244.0.6:43263 - 14339 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000212073s
	[INFO] 10.244.0.6:43263 - 39681 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000205896s
	[INFO] 10.244.0.6:33956 - 63895 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104753s
	[INFO] 10.244.0.6:33956 - 38802 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000083122s
	[INFO] 10.244.0.6:40038 - 19046 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093401s
	[INFO] 10.244.0.6:40038 - 6756 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064036s
	[INFO] 10.244.0.6:36986 - 25565 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000031308s
	[INFO] 10.244.0.6:36986 - 23259 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046989s
	[INFO] 10.244.0.6:58056 - 39356 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009466s
	[INFO] 10.244.0.6:58056 - 14243 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000041422s
	[INFO] 10.244.0.22:43424 - 58041 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000400291s
	[INFO] 10.244.0.22:41194 - 1157 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142741s
	[INFO] 10.244.0.22:57305 - 32410 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100836s
	[INFO] 10.244.0.22:52169 - 65482 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000097047s
	[INFO] 10.244.0.22:40228 - 47166 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104419s
	[INFO] 10.244.0.22:53036 - 19764 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143616s
	[INFO] 10.244.0.22:34328 - 24030 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000638244s
	[INFO] 10.244.0.22:35768 - 17654 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000902665s
	[INFO] 10.244.0.24:34710 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000380641s
	[INFO] 10.244.0.24:41806 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127074s
	
	
	==> describe nodes <==
	Name:               addons-377932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-377932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=addons-377932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T17_30_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-377932
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:30:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-377932
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:35:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:34:18 +0000   Thu, 25 Jul 2024 17:30:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:34:18 +0000   Thu, 25 Jul 2024 17:30:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:34:18 +0000   Thu, 25 Jul 2024 17:30:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:34:18 +0000   Thu, 25 Jul 2024 17:30:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    addons-377932
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1ad6651334941f8ab25b3dc98a618d4
	  System UUID:                a1ad6651-3349-41f8-ab25-b3dc98a618d4
	  Boot ID:                    25c5e1f3-b9c2-4564-b6e2-3d70d430654e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     hello-world-app-6778b5fc9f-8zkzg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-7db6d8ff4d-d9w47                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m44s
	  kube-system                 etcd-addons-377932                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m58s
	  kube-system                 kube-apiserver-addons-377932             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-addons-377932    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-proxy-lvfsq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-scheduler-addons-377932             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 metrics-server-c59844bb4-nn7lw           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m40s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m38s  kube-proxy       
	  Normal  Starting                 4m59s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m58s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m58s  kubelet          Node addons-377932 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s  kubelet          Node addons-377932 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s  kubelet          Node addons-377932 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m58s  kubelet          Node addons-377932 status is now: NodeReady
	  Normal  RegisteredNode           4m46s  node-controller  Node addons-377932 event: Registered Node addons-377932 in Controller
	
	
	==> dmesg <==
	[Jul25 17:31] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.555816] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.071696] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.737151] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.017391] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.066267] kauditd_printk_skb: 94 callbacks suppressed
	[  +6.753077] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.060538] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.518415] kauditd_printk_skb: 7 callbacks suppressed
	[Jul25 17:32] kauditd_printk_skb: 45 callbacks suppressed
	[ +10.387617] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.768604] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.214825] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.369753] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.021562] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.027313] kauditd_printk_skb: 9 callbacks suppressed
	[Jul25 17:33] kauditd_printk_skb: 35 callbacks suppressed
	[ +20.972468] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.260723] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.556230] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.450480] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.510807] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.369221] kauditd_printk_skb: 13 callbacks suppressed
	[Jul25 17:35] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.610152] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [74106d9dcdfc7a2f06c3c370eb476841959e525b1b30fbfd2d47762bb38e671d] <==
	{"level":"warn","ts":"2024-07-25T17:31:32.690351Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:31:32.298102Z","time spent":"392.233139ms","remote":"127.0.0.1:44736","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":1,"response size":836,"request content":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-nn7lw.17e585017caf86f8\" "}
	{"level":"info","ts":"2024-07-25T17:31:32.689886Z","caller":"traceutil/trace.go:171","msg":"trace[762310865] range","detail":"{range_begin:/registry/pods/yakd-dashboard/yakd-dashboard-799879c74f-dcg7m; range_end:; response_count:1; response_revision:998; }","duration":"177.667092ms","start":"2024-07-25T17:31:32.512207Z","end":"2024-07-25T17:31:32.689874Z","steps":["trace[762310865] 'agreement among raft nodes before linearized reading'  (duration: 175.932822ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:31:40.794264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.557994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85761"}
	{"level":"info","ts":"2024-07-25T17:31:40.794397Z","caller":"traceutil/trace.go:171","msg":"trace[1325571677] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1066; }","duration":"102.721451ms","start":"2024-07-25T17:31:40.691665Z","end":"2024-07-25T17:31:40.794386Z","steps":["trace[1325571677] 'range keys from in-memory index tree'  (duration: 102.383306ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:31:45.415532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.092629ms","expected-duration":"100ms","prefix":"","request":"header:<ID:345529135170824684 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-377932\" mod_revision:1012 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-377932\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-377932\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-25T17:31:45.415683Z","caller":"traceutil/trace.go:171","msg":"trace[1704449633] linearizableReadLoop","detail":"{readStateIndex:1136; appliedIndex:1135; }","duration":"224.525723ms","start":"2024-07-25T17:31:45.191143Z","end":"2024-07-25T17:31:45.415669Z","steps":["trace[1704449633] 'read index received'  (duration: 100.150917ms)","trace[1704449633] 'applied index is now lower than readState.Index'  (duration: 124.373688ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T17:31:45.415962Z","caller":"traceutil/trace.go:171","msg":"trace[1417700302] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"484.086363ms","start":"2024-07-25T17:31:44.931859Z","end":"2024-07-25T17:31:45.415945Z","steps":["trace[1417700302] 'process raft request'  (duration: 359.485918ms)","trace[1417700302] 'compare'  (duration: 123.849162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-25T17:31:45.416184Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:31:44.93184Z","time spent":"484.280217ms","remote":"127.0.0.1:44916","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-377932\" mod_revision:1012 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-377932\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-377932\" > >"}
	{"level":"warn","ts":"2024-07-25T17:31:45.416547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.395885ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85761"}
	{"level":"info","ts":"2024-07-25T17:31:45.416618Z","caller":"traceutil/trace.go:171","msg":"trace[1547704006] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1100; }","duration":"225.487932ms","start":"2024-07-25T17:31:45.191119Z","end":"2024-07-25T17:31:45.416607Z","steps":["trace[1547704006] 'agreement among raft nodes before linearized reading'  (duration: 225.263023ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:31:45.41741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.557174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-25T17:31:45.417492Z","caller":"traceutil/trace.go:171","msg":"trace[1303947353] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1100; }","duration":"120.017077ms","start":"2024-07-25T17:31:45.297464Z","end":"2024-07-25T17:31:45.417481Z","steps":["trace[1303947353] 'agreement among raft nodes before linearized reading'  (duration: 119.445139ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:31:45.417683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.764401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-25T17:31:45.417743Z","caller":"traceutil/trace.go:171","msg":"trace[80852776] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1100; }","duration":"149.858176ms","start":"2024-07-25T17:31:45.267876Z","end":"2024-07-25T17:31:45.417735Z","steps":["trace[80852776] 'agreement among raft nodes before linearized reading'  (duration: 149.759907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:31:45.419212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.577773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-25T17:31:45.41935Z","caller":"traceutil/trace.go:171","msg":"trace[362820918] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1100; }","duration":"211.736581ms","start":"2024-07-25T17:31:45.207605Z","end":"2024-07-25T17:31:45.419341Z","steps":["trace[362820918] 'agreement among raft nodes before linearized reading'  (duration: 210.310741ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T17:32:28.600454Z","caller":"traceutil/trace.go:171","msg":"trace[605734207] linearizableReadLoop","detail":"{readStateIndex:1375; appliedIndex:1374; }","duration":"158.630197ms","start":"2024-07-25T17:32:28.441803Z","end":"2024-07-25T17:32:28.600433Z","steps":["trace[605734207] 'read index received'  (duration: 158.482248ms)","trace[605734207] 'applied index is now lower than readState.Index'  (duration: 147.445µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T17:32:28.600717Z","caller":"traceutil/trace.go:171","msg":"trace[2054451883] transaction","detail":"{read_only:false; response_revision:1328; number_of_response:1; }","duration":"236.40985ms","start":"2024-07-25T17:32:28.364295Z","end":"2024-07-25T17:32:28.600704Z","steps":["trace[2054451883] 'process raft request'  (duration: 236.045771ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:32:28.60092Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.086634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/\" range_end:\"/registry/pods/yakd-dashboard0\" ","response":"range_response_count:1 size:4330"}
	{"level":"info","ts":"2024-07-25T17:32:28.600945Z","caller":"traceutil/trace.go:171","msg":"trace[1867259601] range","detail":"{range_begin:/registry/pods/yakd-dashboard/; range_end:/registry/pods/yakd-dashboard0; response_count:1; response_revision:1328; }","duration":"159.159823ms","start":"2024-07-25T17:32:28.441778Z","end":"2024-07-25T17:32:28.600938Z","steps":["trace[1867259601] 'agreement among raft nodes before linearized reading'  (duration: 159.052961ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:32:28.601234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.966828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85995"}
	{"level":"info","ts":"2024-07-25T17:32:28.601257Z","caller":"traceutil/trace.go:171","msg":"trace[1953377226] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1328; }","duration":"156.013637ms","start":"2024-07-25T17:32:28.445237Z","end":"2024-07-25T17:32:28.601251Z","steps":["trace[1953377226] 'agreement among raft nodes before linearized reading'  (duration: 155.844376ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:32:28.601574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.291158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85995"}
	{"level":"info","ts":"2024-07-25T17:32:28.601594Z","caller":"traceutil/trace.go:171","msg":"trace[2045294918] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1328; }","duration":"156.320505ms","start":"2024-07-25T17:32:28.445268Z","end":"2024-07-25T17:32:28.601588Z","steps":["trace[2045294918] 'agreement among raft nodes before linearized reading'  (duration: 156.194287ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:33:02.578729Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:33:02.234918Z","time spent":"343.80702ms","remote":"127.0.0.1:44696","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> kernel <==
	 17:35:11 up 5 min,  0 users,  load average: 0.31, 0.74, 0.40
	Linux addons-377932 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cbe8d24934c776db5c242da943566ed443ef9a8453d1b9eb2574a622d2318cfa] <==
	E0725 17:32:22.326949       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0725 17:32:22.327586       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.47.217:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.47.217:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.47.217:443: connect: connection refused
	I0725 17:32:22.388151       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0725 17:32:39.879850       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0725 17:32:40.668189       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0725 17:32:40.855979       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.185.29"}
	W0725 17:32:40.935506       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0725 17:33:09.804713       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0725 17:33:12.987920       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0725 17:33:32.039138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0725 17:33:32.039213       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0725 17:33:32.071185       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0725 17:33:32.071574       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0725 17:33:32.090155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0725 17:33:32.090908       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0725 17:33:32.110467       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0725 17:33:32.112186       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0725 17:33:32.137390       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0725 17:33:32.137441       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0725 17:33:33.091136       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0725 17:33:33.138110       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0725 17:33:33.146608       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0725 17:33:41.017737       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.214.51"}
	I0725 17:35:00.984788       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.165.229"}
	
	
	==> kube-controller-manager [a6dbfcd8215ac5cf2a3ae2688eab72ea0ad3e16b46ba80fb0124143a8b0562fc] <==
	I0725 17:33:56.842036       1 shared_informer.go:320] Caches are synced for garbage collector
	W0725 17:34:04.616164       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:34:04.616321       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0725 17:34:04.778265       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0725 17:34:05.148357       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:34:05.148488       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:34:09.654823       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:34:09.654962       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:34:36.438949       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:34:36.439205       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:34:39.249935       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:34:39.250063       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:34:43.006779       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:34:43.006975       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:34:52.878143       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:34:52.878315       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0725 17:35:00.852359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="43.541247ms"
	I0725 17:35:00.866769       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="14.310225ms"
	I0725 17:35:00.867121       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="48.558µs"
	I0725 17:35:00.874050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="68.094µs"
	I0725 17:35:03.543953       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0725 17:35:03.549063       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="5.279µs"
	I0725 17:35:03.561044       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0725 17:35:03.909801       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.825426ms"
	I0725 17:35:03.910171       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="57.713µs"
	
	
	==> kube-proxy [383275aa3c4dc1009fe684f9fa0797717d9dbef15416e9af9ff583db4cd4ed61] <==
	I0725 17:30:32.953969       1 server_linux.go:69] "Using iptables proxy"
	I0725 17:30:32.984434       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	I0725 17:30:33.130839       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:30:33.130913       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:30:33.130930       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:30:33.133102       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:30:33.133305       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:30:33.133317       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:30:33.134982       1 config.go:192] "Starting service config controller"
	I0725 17:30:33.135504       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:30:33.135581       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:30:33.135598       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:30:33.138617       1 config.go:319] "Starting node config controller"
	I0725 17:30:33.138642       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:30:33.236583       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 17:30:33.236674       1 shared_informer.go:320] Caches are synced for service config
	I0725 17:30:33.239125       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [57d187294f4f9fa49570c5c4ae46c651198cf47b3c30b7c01b542af281cf6b3f] <==
	W0725 17:30:10.785757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 17:30:10.785836       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 17:30:10.884326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 17:30:10.884711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 17:30:10.991803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 17:30:10.991907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 17:30:11.043905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 17:30:11.044095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 17:30:11.165911       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 17:30:11.165940       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 17:30:11.207547       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 17:30:11.207587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0725 17:30:11.229278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 17:30:11.229480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 17:30:11.259680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 17:30:11.259723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0725 17:30:11.335661       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 17:30:11.335752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 17:30:11.368682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 17:30:11.368793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 17:30:11.379055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 17:30:11.379181       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 17:30:11.397363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 17:30:11.397403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0725 17:30:12.998719       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 17:35:00 addons-377932 kubelet[1275]: E0725 17:35:00.843247    1275 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b21ba09-4045-4b12-897a-14574629b950" containerName="helm-test"
	Jul 25 17:35:00 addons-377932 kubelet[1275]: E0725 17:35:00.843350    1275 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="404a7d43-869c-4137-b5a9-e4f4ce531f65" containerName="tiller"
	Jul 25 17:35:00 addons-377932 kubelet[1275]: E0725 17:35:00.843445    1275 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a7ddbf-293f-40e8-9896-20cf181dacb1" containerName="headlamp"
	Jul 25 17:35:00 addons-377932 kubelet[1275]: I0725 17:35:00.843546    1275 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b21ba09-4045-4b12-897a-14574629b950" containerName="helm-test"
	Jul 25 17:35:00 addons-377932 kubelet[1275]: I0725 17:35:00.843581    1275 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a7ddbf-293f-40e8-9896-20cf181dacb1" containerName="headlamp"
	Jul 25 17:35:00 addons-377932 kubelet[1275]: I0725 17:35:00.843654    1275 memory_manager.go:354] "RemoveStaleState removing state" podUID="404a7d43-869c-4137-b5a9-e4f4ce531f65" containerName="tiller"
	Jul 25 17:35:00 addons-377932 kubelet[1275]: I0725 17:35:00.921155    1275 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dwb8\" (UniqueName: \"kubernetes.io/projected/f7916284-6975-4022-aa91-4a43f1c6e583-kube-api-access-9dwb8\") pod \"hello-world-app-6778b5fc9f-8zkzg\" (UID: \"f7916284-6975-4022-aa91-4a43f1c6e583\") " pod="default/hello-world-app-6778b5fc9f-8zkzg"
	Jul 25 17:35:01 addons-377932 kubelet[1275]: I0725 17:35:01.928719    1275 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5qc8\" (UniqueName: \"kubernetes.io/projected/edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb-kube-api-access-h5qc8\") pod \"edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb\" (UID: \"edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb\") "
	Jul 25 17:35:01 addons-377932 kubelet[1275]: I0725 17:35:01.930854    1275 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb-kube-api-access-h5qc8" (OuterVolumeSpecName: "kube-api-access-h5qc8") pod "edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb" (UID: "edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb"). InnerVolumeSpecName "kube-api-access-h5qc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 25 17:35:02 addons-377932 kubelet[1275]: I0725 17:35:02.030026    1275 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h5qc8\" (UniqueName: \"kubernetes.io/projected/edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb-kube-api-access-h5qc8\") on node \"addons-377932\" DevicePath \"\""
	Jul 25 17:35:02 addons-377932 kubelet[1275]: I0725 17:35:02.880255    1275 scope.go:117] "RemoveContainer" containerID="5dc081300a007e3eeb81a07f51e3755fb121b8b33d3f379849cc7d69abf9b548"
	Jul 25 17:35:03 addons-377932 kubelet[1275]: I0725 17:35:03.015781    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb" path="/var/lib/kubelet/pods/edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb/volumes"
	Jul 25 17:35:05 addons-377932 kubelet[1275]: I0725 17:35:05.016473    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57501751-2e35-47e2-8ecf-76ac61e45cc9" path="/var/lib/kubelet/pods/57501751-2e35-47e2-8ecf-76ac61e45cc9/volumes"
	Jul 25 17:35:05 addons-377932 kubelet[1275]: I0725 17:35:05.017488    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea6cd12-8bbe-4b4c-9019-69e94b6305cc" path="/var/lib/kubelet/pods/fea6cd12-8bbe-4b4c-9019-69e94b6305cc/volumes"
	Jul 25 17:35:06 addons-377932 kubelet[1275]: I0725 17:35:06.765555    1275 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab53f737-1a06-44d7-b220-2d23dad25808-webhook-cert\") pod \"ab53f737-1a06-44d7-b220-2d23dad25808\" (UID: \"ab53f737-1a06-44d7-b220-2d23dad25808\") "
	Jul 25 17:35:06 addons-377932 kubelet[1275]: I0725 17:35:06.765606    1275 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6jr2\" (UniqueName: \"kubernetes.io/projected/ab53f737-1a06-44d7-b220-2d23dad25808-kube-api-access-x6jr2\") pod \"ab53f737-1a06-44d7-b220-2d23dad25808\" (UID: \"ab53f737-1a06-44d7-b220-2d23dad25808\") "
	Jul 25 17:35:06 addons-377932 kubelet[1275]: I0725 17:35:06.768626    1275 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab53f737-1a06-44d7-b220-2d23dad25808-kube-api-access-x6jr2" (OuterVolumeSpecName: "kube-api-access-x6jr2") pod "ab53f737-1a06-44d7-b220-2d23dad25808" (UID: "ab53f737-1a06-44d7-b220-2d23dad25808"). InnerVolumeSpecName "kube-api-access-x6jr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 25 17:35:06 addons-377932 kubelet[1275]: I0725 17:35:06.772172    1275 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab53f737-1a06-44d7-b220-2d23dad25808-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ab53f737-1a06-44d7-b220-2d23dad25808" (UID: "ab53f737-1a06-44d7-b220-2d23dad25808"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 25 17:35:06 addons-377932 kubelet[1275]: I0725 17:35:06.866235    1275 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ab53f737-1a06-44d7-b220-2d23dad25808-webhook-cert\") on node \"addons-377932\" DevicePath \"\""
	Jul 25 17:35:06 addons-377932 kubelet[1275]: I0725 17:35:06.866277    1275 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-x6jr2\" (UniqueName: \"kubernetes.io/projected/ab53f737-1a06-44d7-b220-2d23dad25808-kube-api-access-x6jr2\") on node \"addons-377932\" DevicePath \"\""
	Jul 25 17:35:06 addons-377932 kubelet[1275]: I0725 17:35:06.906934    1275 scope.go:117] "RemoveContainer" containerID="45d5984788762a9b934fadf218e2c4ecbc8efec815ea0f1bf0ae815cf1ee9952"
	Jul 25 17:35:06 addons-377932 kubelet[1275]: I0725 17:35:06.936655    1275 scope.go:117] "RemoveContainer" containerID="45d5984788762a9b934fadf218e2c4ecbc8efec815ea0f1bf0ae815cf1ee9952"
	Jul 25 17:35:06 addons-377932 kubelet[1275]: E0725 17:35:06.937369    1275 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d5984788762a9b934fadf218e2c4ecbc8efec815ea0f1bf0ae815cf1ee9952\": container with ID starting with 45d5984788762a9b934fadf218e2c4ecbc8efec815ea0f1bf0ae815cf1ee9952 not found: ID does not exist" containerID="45d5984788762a9b934fadf218e2c4ecbc8efec815ea0f1bf0ae815cf1ee9952"
	Jul 25 17:35:06 addons-377932 kubelet[1275]: I0725 17:35:06.937455    1275 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d5984788762a9b934fadf218e2c4ecbc8efec815ea0f1bf0ae815cf1ee9952"} err="failed to get container status \"45d5984788762a9b934fadf218e2c4ecbc8efec815ea0f1bf0ae815cf1ee9952\": rpc error: code = NotFound desc = could not find container \"45d5984788762a9b934fadf218e2c4ecbc8efec815ea0f1bf0ae815cf1ee9952\": container with ID starting with 45d5984788762a9b934fadf218e2c4ecbc8efec815ea0f1bf0ae815cf1ee9952 not found: ID does not exist"
	Jul 25 17:35:07 addons-377932 kubelet[1275]: I0725 17:35:07.017546    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab53f737-1a06-44d7-b220-2d23dad25808" path="/var/lib/kubelet/pods/ab53f737-1a06-44d7-b220-2d23dad25808/volumes"
	
	
	==> storage-provisioner [b60fb2bb6a1c7427e330a768b5c62d791b070a56ee96c61acc8e04670a5a8b14] <==
	I0725 17:31:03.952748       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 17:31:03.972372       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 17:31:03.972424       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 17:31:03.982985       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 17:31:03.983820       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96c9b105-9701-4bd5-be6c-ab3851c1b16b", APIVersion:"v1", ResourceVersion:"906", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-377932_d098c3fa-8bf8-48ba-9c68-5904fb97e43e became leader
	I0725 17:31:03.983857       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-377932_d098c3fa-8bf8-48ba-9c68-5904fb97e43e!
	I0725 17:31:04.084069       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-377932_d098c3fa-8bf8-48ba-9c68-5904fb97e43e!
	
	
	==> storage-provisioner [cf4e20ecc3a7a1e5a06a26761db7253d5e088592844861951898a1f16f690cfe] <==
	I0725 17:30:33.434850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0725 17:31:03.440771       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-377932 -n addons-377932
helpers_test.go:261: (dbg) Run:  kubectl --context addons-377932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.04s)
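For manual triage of this Ingress failure, a minimal check of the ingress-nginx addon on the same profile could look like the commands below. This is only a suggested follow-up, not part of the captured test output; the context name addons-377932 is taken from the logs above, everything else is standard kubectl:

	kubectl --context addons-377932 -n ingress-nginx get pods,svc,endpoints
	kubectl --context addons-377932 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
	kubectl --context addons-377932 get ingress -A -o wide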

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (367.17s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.828157ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-nn7lw" [4b69ce7d-1c27-46dc-8f29-5bab086365eb] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.180604306s
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (77.47371ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 2m1.713551587s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (67.530108ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 2m5.377009999s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (69.195271ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 2m8.052710708s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (67.379636ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 2m15.587288137s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (72.211895ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 2m24.665505368s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (61.126131ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 2m43.535926433s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (64.010951ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 2m55.491834375s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (62.092437ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 3m25.286548598s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (65.980074ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 4m16.001833941s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (60.920687ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 5m5.768181533s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (64.818577ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 5m47.230436181s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (61.476883ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 6m51.902178145s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (64.285706ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 7m24.76939164s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-377932 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-377932 top pods -n kube-system: exit status 1 (61.630401ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-d9w47, age: 8m1.537751586s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 addons disable metrics-server --alsologtostderr -v=1
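For context, the loop above simply shells out to "kubectl top pods -n kube-system" over and over and gives up once its retry budget is spent. A minimal Go sketch of that poll-until-metrics-appear pattern follows; the context name, namespace, interval and timeout are illustrative values, not the ones used by addons_test.go.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForPodMetrics shells out to `kubectl top pods` in the given namespace
// and retries until the command succeeds or the timeout elapses.
func waitForPodMetrics(kubeContext, namespace string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"top", "pods", "-n", namespace).CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return nil
		}
		// While metrics-server is still warming up (or broken), kubectl
		// reports "error: Metrics not available for pod ...".
		fmt.Printf("metrics not ready: %v\n", err)
		time.Sleep(interval)
	}
	return fmt.Errorf("pod metrics never became available within %s", timeout)
}

func main() {
	if err := waitForPodMetrics("addons-377932", "kube-system", 30*time.Second, 8*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In this run every attempt returned "Metrics not available", so the equivalent loop in the test exhausted its budget and the check at addons_test.go:431 failed.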
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-377932 -n addons-377932
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-377932 logs -n 25: (1.159640824s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-108558                                                                     | download-only-108558 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-606783 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC |                     |
	|         | binary-mirror-606783                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:44459                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-606783                                                                     | binary-mirror-606783 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:29 UTC |
	| addons  | enable dashboard -p                                                                         | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC |                     |
	|         | addons-377932                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC |                     |
	|         | addons-377932                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-377932 --wait=true                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	|         | addons-377932                                                                               |                      |         |         |                     |                     |
	| ip      | addons-377932 ip                                                                            | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-377932 ssh curl -s                                                                   | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-377932 ssh cat                                                                       | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:32 UTC |
	|         | /opt/local-path-provisioner/pvc-21933440-c7fa-4b82-89b2-60e7bd69bee6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:32 UTC | 25 Jul 24 17:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-377932 addons                                                                        | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-377932 addons                                                                        | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | -p addons-377932                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | -p addons-377932                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | addons-377932                                                                               |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:33 UTC | 25 Jul 24 17:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-377932 ip                                                                            | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:35 UTC | 25 Jul 24 17:35 UTC |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:35 UTC | 25 Jul 24 17:35 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-377932 addons disable                                                                | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:35 UTC | 25 Jul 24 17:35 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-377932 addons                                                                        | addons-377932        | jenkins | v1.33.1 | 25 Jul 24 17:38 UTC | 25 Jul 24 17:38 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 17:29:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:29:35.483663   14037 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:29:35.483933   14037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:29:35.483943   14037 out.go:304] Setting ErrFile to fd 2...
	I0725 17:29:35.483949   14037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:29:35.484123   14037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:29:35.484759   14037 out.go:298] Setting JSON to false
	I0725 17:29:35.485558   14037 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":719,"bootTime":1721927856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 17:29:35.485613   14037 start.go:139] virtualization: kvm guest
	I0725 17:29:35.487628   14037 out.go:177] * [addons-377932] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 17:29:35.489115   14037 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 17:29:35.489126   14037 notify.go:220] Checking for updates...
	I0725 17:29:35.491505   14037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:29:35.492583   14037 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:29:35.493766   14037 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:29:35.495091   14037 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 17:29:35.496263   14037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 17:29:35.497460   14037 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 17:29:35.528353   14037 out.go:177] * Using the kvm2 driver based on user configuration
	I0725 17:29:35.529444   14037 start.go:297] selected driver: kvm2
	I0725 17:29:35.529455   14037 start.go:901] validating driver "kvm2" against <nil>
	I0725 17:29:35.529465   14037 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:29:35.530104   14037 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:29:35.530169   14037 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 17:29:35.544383   14037 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 17:29:35.544429   14037 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 17:29:35.544669   14037 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:29:35.544726   14037 cni.go:84] Creating CNI manager for ""
	I0725 17:29:35.544744   14037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 17:29:35.544760   14037 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 17:29:35.544807   14037 start.go:340] cluster config:
	{Name:addons-377932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:29:35.544914   14037 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:29:35.546565   14037 out.go:177] * Starting "addons-377932" primary control-plane node in "addons-377932" cluster
	I0725 17:29:35.547631   14037 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:29:35.547658   14037 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 17:29:35.547665   14037 cache.go:56] Caching tarball of preloaded images
	I0725 17:29:35.547729   14037 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 17:29:35.547738   14037 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 17:29:35.548013   14037 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/config.json ...
	I0725 17:29:35.548029   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/config.json: {Name:mka8eb86bdc511d9930f24e5d458457e2aefedee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:29:35.548138   14037 start.go:360] acquireMachinesLock for addons-377932: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 17:29:35.548175   14037 start.go:364] duration metric: took 26.578µs to acquireMachinesLock for "addons-377932"
	I0725 17:29:35.548191   14037 start.go:93] Provisioning new machine with config: &{Name:addons-377932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
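The cluster config echoed above is also what gets written to the profile's config.json (see the "Saving config to .../profiles/addons-377932/config.json" line earlier in the log). Below is a minimal sketch of reading a few of those fields back from that file; the struct covers only the handful of field names visible in the dump and is an illustrative subset, not minikube's full config schema.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// profileConfig mirrors only a few of the fields shown in the cluster config
// dump above; the real minikube config type carries many more.
type profileConfig struct {
	Name             string
	Memory           int
	CPUs             int
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}
}

func main() {
	// Path mirrors the "Saving config to ..." line above; substitute your own
	// MINIKUBE_HOME and profile name.
	path := os.ExpandEnv("$HOME/.minikube/profiles/addons-377932/config.json")
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read config:", err)
		return
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Println("parse config:", err)
		return
	}
	fmt.Printf("%s: Kubernetes %s on %s, %d MiB RAM, %d CPUs, runtime %s\n",
		cfg.Name, cfg.KubernetesConfig.KubernetesVersion, cfg.Driver,
		cfg.Memory, cfg.CPUs, cfg.KubernetesConfig.ContainerRuntime)
}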
	I0725 17:29:35.548239   14037 start.go:125] createHost starting for "" (driver="kvm2")
	I0725 17:29:35.549654   14037 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0725 17:29:35.549762   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:29:35.549795   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:29:35.563619   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38025
	I0725 17:29:35.564008   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:29:35.564513   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:29:35.564532   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:29:35.564939   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:29:35.565120   14037 main.go:141] libmachine: (addons-377932) Calling .GetMachineName
	I0725 17:29:35.565296   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:35.565448   14037 start.go:159] libmachine.API.Create for "addons-377932" (driver="kvm2")
	I0725 17:29:35.565526   14037 client.go:168] LocalClient.Create starting
	I0725 17:29:35.565566   14037 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 17:29:35.971168   14037 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 17:29:36.120642   14037 main.go:141] libmachine: Running pre-create checks...
	I0725 17:29:36.120664   14037 main.go:141] libmachine: (addons-377932) Calling .PreCreateCheck
	I0725 17:29:36.121268   14037 main.go:141] libmachine: (addons-377932) Calling .GetConfigRaw
	I0725 17:29:36.121744   14037 main.go:141] libmachine: Creating machine...
	I0725 17:29:36.121758   14037 main.go:141] libmachine: (addons-377932) Calling .Create
	I0725 17:29:36.121970   14037 main.go:141] libmachine: (addons-377932) Creating KVM machine...
	I0725 17:29:36.123219   14037 main.go:141] libmachine: (addons-377932) DBG | found existing default KVM network
	I0725 17:29:36.124069   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:36.123916   14059 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001125f0}
	I0725 17:29:36.124096   14037 main.go:141] libmachine: (addons-377932) DBG | created network xml: 
	I0725 17:29:36.124111   14037 main.go:141] libmachine: (addons-377932) DBG | <network>
	I0725 17:29:36.124120   14037 main.go:141] libmachine: (addons-377932) DBG |   <name>mk-addons-377932</name>
	I0725 17:29:36.124130   14037 main.go:141] libmachine: (addons-377932) DBG |   <dns enable='no'/>
	I0725 17:29:36.124139   14037 main.go:141] libmachine: (addons-377932) DBG |   
	I0725 17:29:36.124155   14037 main.go:141] libmachine: (addons-377932) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0725 17:29:36.124182   14037 main.go:141] libmachine: (addons-377932) DBG |     <dhcp>
	I0725 17:29:36.124202   14037 main.go:141] libmachine: (addons-377932) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0725 17:29:36.124213   14037 main.go:141] libmachine: (addons-377932) DBG |     </dhcp>
	I0725 17:29:36.124220   14037 main.go:141] libmachine: (addons-377932) DBG |   </ip>
	I0725 17:29:36.124228   14037 main.go:141] libmachine: (addons-377932) DBG |   
	I0725 17:29:36.124239   14037 main.go:141] libmachine: (addons-377932) DBG | </network>
	I0725 17:29:36.124249   14037 main.go:141] libmachine: (addons-377932) DBG | 
	I0725 17:29:36.129539   14037 main.go:141] libmachine: (addons-377932) DBG | trying to create private KVM network mk-addons-377932 192.168.39.0/24...
	I0725 17:29:36.193986   14037 main.go:141] libmachine: (addons-377932) DBG | private KVM network mk-addons-377932 192.168.39.0/24 created
	I0725 17:29:36.194027   14037 main.go:141] libmachine: (addons-377932) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932 ...
	I0725 17:29:36.194047   14037 main.go:141] libmachine: (addons-377932) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 17:29:36.194055   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:36.193979   14059 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:29:36.194123   14037 main.go:141] libmachine: (addons-377932) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 17:29:36.488956   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:36.488815   14059 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa...
	I0725 17:29:36.635362   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:36.635249   14059 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/addons-377932.rawdisk...
	I0725 17:29:36.635387   14037 main.go:141] libmachine: (addons-377932) DBG | Writing magic tar header
	I0725 17:29:36.635400   14037 main.go:141] libmachine: (addons-377932) DBG | Writing SSH key tar header
	I0725 17:29:36.635412   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:36.635359   14059 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932 ...
	I0725 17:29:36.635472   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932
	I0725 17:29:36.635519   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 17:29:36.635543   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:29:36.635557   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932 (perms=drwx------)
	I0725 17:29:36.635569   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 17:29:36.635596   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 17:29:36.635608   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 17:29:36.635618   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 17:29:36.635633   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 17:29:36.635646   14037 main.go:141] libmachine: (addons-377932) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 17:29:36.635657   14037 main.go:141] libmachine: (addons-377932) Creating domain...
	I0725 17:29:36.635684   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 17:29:36.635702   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home/jenkins
	I0725 17:29:36.635713   14037 main.go:141] libmachine: (addons-377932) DBG | Checking permissions on dir: /home
	I0725 17:29:36.635725   14037 main.go:141] libmachine: (addons-377932) DBG | Skipping /home - not owner
	I0725 17:29:36.636506   14037 main.go:141] libmachine: (addons-377932) define libvirt domain using xml: 
	I0725 17:29:36.636524   14037 main.go:141] libmachine: (addons-377932) <domain type='kvm'>
	I0725 17:29:36.636534   14037 main.go:141] libmachine: (addons-377932)   <name>addons-377932</name>
	I0725 17:29:36.636546   14037 main.go:141] libmachine: (addons-377932)   <memory unit='MiB'>4000</memory>
	I0725 17:29:36.636557   14037 main.go:141] libmachine: (addons-377932)   <vcpu>2</vcpu>
	I0725 17:29:36.636564   14037 main.go:141] libmachine: (addons-377932)   <features>
	I0725 17:29:36.636573   14037 main.go:141] libmachine: (addons-377932)     <acpi/>
	I0725 17:29:36.636583   14037 main.go:141] libmachine: (addons-377932)     <apic/>
	I0725 17:29:36.636593   14037 main.go:141] libmachine: (addons-377932)     <pae/>
	I0725 17:29:36.636600   14037 main.go:141] libmachine: (addons-377932)     
	I0725 17:29:36.636608   14037 main.go:141] libmachine: (addons-377932)   </features>
	I0725 17:29:36.636615   14037 main.go:141] libmachine: (addons-377932)   <cpu mode='host-passthrough'>
	I0725 17:29:36.636621   14037 main.go:141] libmachine: (addons-377932)   
	I0725 17:29:36.636633   14037 main.go:141] libmachine: (addons-377932)   </cpu>
	I0725 17:29:36.636659   14037 main.go:141] libmachine: (addons-377932)   <os>
	I0725 17:29:36.636682   14037 main.go:141] libmachine: (addons-377932)     <type>hvm</type>
	I0725 17:29:36.636695   14037 main.go:141] libmachine: (addons-377932)     <boot dev='cdrom'/>
	I0725 17:29:36.636706   14037 main.go:141] libmachine: (addons-377932)     <boot dev='hd'/>
	I0725 17:29:36.636717   14037 main.go:141] libmachine: (addons-377932)     <bootmenu enable='no'/>
	I0725 17:29:36.636731   14037 main.go:141] libmachine: (addons-377932)   </os>
	I0725 17:29:36.636759   14037 main.go:141] libmachine: (addons-377932)   <devices>
	I0725 17:29:36.636783   14037 main.go:141] libmachine: (addons-377932)     <disk type='file' device='cdrom'>
	I0725 17:29:36.636803   14037 main.go:141] libmachine: (addons-377932)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/boot2docker.iso'/>
	I0725 17:29:36.636814   14037 main.go:141] libmachine: (addons-377932)       <target dev='hdc' bus='scsi'/>
	I0725 17:29:36.636826   14037 main.go:141] libmachine: (addons-377932)       <readonly/>
	I0725 17:29:36.636836   14037 main.go:141] libmachine: (addons-377932)     </disk>
	I0725 17:29:36.636849   14037 main.go:141] libmachine: (addons-377932)     <disk type='file' device='disk'>
	I0725 17:29:36.636865   14037 main.go:141] libmachine: (addons-377932)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 17:29:36.636882   14037 main.go:141] libmachine: (addons-377932)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/addons-377932.rawdisk'/>
	I0725 17:29:36.636893   14037 main.go:141] libmachine: (addons-377932)       <target dev='hda' bus='virtio'/>
	I0725 17:29:36.636904   14037 main.go:141] libmachine: (addons-377932)     </disk>
	I0725 17:29:36.636914   14037 main.go:141] libmachine: (addons-377932)     <interface type='network'>
	I0725 17:29:36.636927   14037 main.go:141] libmachine: (addons-377932)       <source network='mk-addons-377932'/>
	I0725 17:29:36.636941   14037 main.go:141] libmachine: (addons-377932)       <model type='virtio'/>
	I0725 17:29:36.636953   14037 main.go:141] libmachine: (addons-377932)     </interface>
	I0725 17:29:36.636963   14037 main.go:141] libmachine: (addons-377932)     <interface type='network'>
	I0725 17:29:36.636975   14037 main.go:141] libmachine: (addons-377932)       <source network='default'/>
	I0725 17:29:36.636985   14037 main.go:141] libmachine: (addons-377932)       <model type='virtio'/>
	I0725 17:29:36.636996   14037 main.go:141] libmachine: (addons-377932)     </interface>
	I0725 17:29:36.637009   14037 main.go:141] libmachine: (addons-377932)     <serial type='pty'>
	I0725 17:29:36.637021   14037 main.go:141] libmachine: (addons-377932)       <target port='0'/>
	I0725 17:29:36.637031   14037 main.go:141] libmachine: (addons-377932)     </serial>
	I0725 17:29:36.637042   14037 main.go:141] libmachine: (addons-377932)     <console type='pty'>
	I0725 17:29:36.637054   14037 main.go:141] libmachine: (addons-377932)       <target type='serial' port='0'/>
	I0725 17:29:36.637065   14037 main.go:141] libmachine: (addons-377932)     </console>
	I0725 17:29:36.637078   14037 main.go:141] libmachine: (addons-377932)     <rng model='virtio'>
	I0725 17:29:36.637092   14037 main.go:141] libmachine: (addons-377932)       <backend model='random'>/dev/random</backend>
	I0725 17:29:36.637103   14037 main.go:141] libmachine: (addons-377932)     </rng>
	I0725 17:29:36.637113   14037 main.go:141] libmachine: (addons-377932)     
	I0725 17:29:36.637123   14037 main.go:141] libmachine: (addons-377932)     
	I0725 17:29:36.637134   14037 main.go:141] libmachine: (addons-377932)   </devices>
	I0725 17:29:36.637144   14037 main.go:141] libmachine: (addons-377932) </domain>
	I0725 17:29:36.637157   14037 main.go:141] libmachine: (addons-377932) 
	I0725 17:29:36.642609   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:f5:2a:49 in network default
	I0725 17:29:36.643102   14037 main.go:141] libmachine: (addons-377932) Ensuring networks are active...
	I0725 17:29:36.643127   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:36.643638   14037 main.go:141] libmachine: (addons-377932) Ensuring network default is active
	I0725 17:29:36.643911   14037 main.go:141] libmachine: (addons-377932) Ensuring network mk-addons-377932 is active
	I0725 17:29:36.644358   14037 main.go:141] libmachine: (addons-377932) Getting domain xml...
	I0725 17:29:36.644924   14037 main.go:141] libmachine: (addons-377932) Creating domain...
	I0725 17:29:38.031137   14037 main.go:141] libmachine: (addons-377932) Waiting to get IP...
	I0725 17:29:38.031801   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:38.032127   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:38.032154   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:38.032077   14059 retry.go:31] will retry after 198.348494ms: waiting for machine to come up
	I0725 17:29:38.232504   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:38.232870   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:38.232898   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:38.232823   14059 retry.go:31] will retry after 371.403368ms: waiting for machine to come up
	I0725 17:29:38.605211   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:38.605569   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:38.605590   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:38.605536   14059 retry.go:31] will retry after 391.428532ms: waiting for machine to come up
	I0725 17:29:38.998030   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:38.998506   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:38.998534   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:38.998443   14059 retry.go:31] will retry after 559.487337ms: waiting for machine to come up
	I0725 17:29:39.559175   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:39.559530   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:39.559558   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:39.559502   14059 retry.go:31] will retry after 656.233772ms: waiting for machine to come up
	I0725 17:29:40.216859   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:40.217419   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:40.217439   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:40.217375   14059 retry.go:31] will retry after 657.72817ms: waiting for machine to come up
	I0725 17:29:40.876932   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:40.877423   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:40.877450   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:40.877375   14059 retry.go:31] will retry after 1.10158035s: waiting for machine to come up
	I0725 17:29:41.980613   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:41.981069   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:41.981098   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:41.981029   14059 retry.go:31] will retry after 1.319598156s: waiting for machine to come up
	I0725 17:29:43.302764   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:43.303193   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:43.303219   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:43.303139   14059 retry.go:31] will retry after 1.160376448s: waiting for machine to come up
	I0725 17:29:44.465308   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:44.465605   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:44.465626   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:44.465569   14059 retry.go:31] will retry after 2.267893376s: waiting for machine to come up
	I0725 17:29:46.735888   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:46.736393   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:46.736422   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:46.736340   14059 retry.go:31] will retry after 2.844725176s: waiting for machine to come up
	I0725 17:29:49.582437   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:49.582883   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:49.582909   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:49.582814   14059 retry.go:31] will retry after 2.873112905s: waiting for machine to come up
	I0725 17:29:52.458443   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:52.458945   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find current IP address of domain addons-377932 in network mk-addons-377932
	I0725 17:29:52.458970   14037 main.go:141] libmachine: (addons-377932) DBG | I0725 17:29:52.458910   14059 retry.go:31] will retry after 3.065951913s: waiting for machine to come up
	I0725 17:29:55.528120   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.528556   14037 main.go:141] libmachine: (addons-377932) Found IP for machine: 192.168.39.150
	I0725 17:29:55.528576   14037 main.go:141] libmachine: (addons-377932) Reserving static IP address...
	I0725 17:29:55.528589   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has current primary IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.528991   14037 main.go:141] libmachine: (addons-377932) DBG | unable to find host DHCP lease matching {name: "addons-377932", mac: "52:54:00:b4:a8:62", ip: "192.168.39.150"} in network mk-addons-377932
	I0725 17:29:55.598128   14037 main.go:141] libmachine: (addons-377932) DBG | Getting to WaitForSSH function...
	I0725 17:29:55.598158   14037 main.go:141] libmachine: (addons-377932) Reserved static IP address: 192.168.39.150
	I0725 17:29:55.598182   14037 main.go:141] libmachine: (addons-377932) Waiting for SSH to be available...
	I0725 17:29:55.600769   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.601146   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:55.601176   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.601327   14037 main.go:141] libmachine: (addons-377932) DBG | Using SSH client type: external
	I0725 17:29:55.601356   14037 main.go:141] libmachine: (addons-377932) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa (-rw-------)
	I0725 17:29:55.601385   14037 main.go:141] libmachine: (addons-377932) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 17:29:55.601399   14037 main.go:141] libmachine: (addons-377932) DBG | About to run SSH command:
	I0725 17:29:55.601410   14037 main.go:141] libmachine: (addons-377932) DBG | exit 0
	I0725 17:29:55.732227   14037 main.go:141] libmachine: (addons-377932) DBG | SSH cmd err, output: <nil>: 
	I0725 17:29:55.732555   14037 main.go:141] libmachine: (addons-377932) KVM machine creation complete!
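The long run of "will retry after ...: waiting for machine to come up" lines above is a wait loop whose delay grows between polls of the domain's DHCP lease until an IP appears. A rough, self-contained sketch of that retry shape follows; the growth factor, jitter, cap and the stand-in check are assumptions for illustration, not minikube's actual retry helper.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries check() with a delay that grows (plus a little jitter)
// between attempts, until it succeeds or the overall timeout elapses.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d: %v; will retry after %s\n", attempt, err, delay)
		time.Sleep(delay)
		// Grow the delay with some jitter, capped so polling keeps going.
		delay = time.Duration(float64(delay)*1.5) + time.Duration(rand.Intn(300))*time.Millisecond
		if delay > 5*time.Second {
			delay = 5 * time.Second
		}
	}
	return errors.New("timed out waiting for machine to come up")
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		// Stand-in for "does the domain have a DHCP lease yet?".
		if time.Since(start) < 3*time.Second {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}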
	I0725 17:29:55.732885   14037 main.go:141] libmachine: (addons-377932) Calling .GetConfigRaw
	I0725 17:29:55.733436   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:55.733697   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:55.733874   14037 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 17:29:55.733889   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:29:55.735248   14037 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 17:29:55.735261   14037 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 17:29:55.735266   14037 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 17:29:55.735272   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:55.737279   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.737647   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:55.737677   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.737844   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:55.738057   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.738206   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.738336   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:55.738497   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:55.738664   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:55.738675   14037 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 17:29:55.843347   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:29:55.843371   14037 main.go:141] libmachine: Detecting the provisioner...
	I0725 17:29:55.843380   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:55.846082   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.846408   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:55.846430   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.846560   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:55.846752   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.846909   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.847039   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:55.847196   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:55.847379   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:55.847390   14037 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 17:29:55.948504   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 17:29:55.948553   14037 main.go:141] libmachine: found compatible host: buildroot
	I0725 17:29:55.948571   14037 main.go:141] libmachine: Provisioning with buildroot...
	I0725 17:29:55.948580   14037 main.go:141] libmachine: (addons-377932) Calling .GetMachineName
	I0725 17:29:55.948823   14037 buildroot.go:166] provisioning hostname "addons-377932"
	I0725 17:29:55.948847   14037 main.go:141] libmachine: (addons-377932) Calling .GetMachineName
	I0725 17:29:55.949027   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:55.952038   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.952422   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:55.952449   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:55.952576   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:55.952752   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.952927   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:55.953169   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:55.953330   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:55.953527   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:55.953541   14037 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-377932 && echo "addons-377932" | sudo tee /etc/hostname
	I0725 17:29:56.071546   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-377932
	
	I0725 17:29:56.071576   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.074203   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.074540   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.074571   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.074740   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:56.074906   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.075049   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.075223   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:56.075423   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:56.075586   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:56.075601   14037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-377932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-377932/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-377932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:29:56.189757   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:29:56.189792   14037 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 17:29:56.189837   14037 buildroot.go:174] setting up certificates
	I0725 17:29:56.189848   14037 provision.go:84] configureAuth start
	I0725 17:29:56.189860   14037 main.go:141] libmachine: (addons-377932) Calling .GetMachineName
	I0725 17:29:56.190175   14037 main.go:141] libmachine: (addons-377932) Calling .GetIP
	I0725 17:29:56.192805   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.193166   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.193191   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.193340   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.195256   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.195522   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.195545   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.195662   14037 provision.go:143] copyHostCerts
	I0725 17:29:56.195743   14037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 17:29:56.195862   14037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 17:29:56.195921   14037 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 17:29:56.195968   14037 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.addons-377932 san=[127.0.0.1 192.168.39.150 addons-377932 localhost minikube]
	I0725 17:29:56.430674   14037 provision.go:177] copyRemoteCerts
	I0725 17:29:56.430734   14037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:29:56.430755   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.433411   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.433736   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.433764   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.433900   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:56.434110   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.434337   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:56.434463   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:29:56.514117   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 17:29:56.536635   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 17:29:56.557659   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 17:29:56.578634   14037 provision.go:87] duration metric: took 388.772402ms to configureAuth
	I0725 17:29:56.578659   14037 buildroot.go:189] setting minikube options for container-runtime
	I0725 17:29:56.578826   14037 config.go:182] Loaded profile config "addons-377932": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:29:56.578906   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.581591   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.581910   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.581931   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.582078   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:56.582274   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.582425   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.582653   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:56.582785   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:56.582974   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:56.582990   14037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 17:29:56.853914   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 17:29:56.853954   14037 main.go:141] libmachine: Checking connection to Docker...
	I0725 17:29:56.853967   14037 main.go:141] libmachine: (addons-377932) Calling .GetURL
	I0725 17:29:56.855204   14037 main.go:141] libmachine: (addons-377932) DBG | Using libvirt version 6000000
	I0725 17:29:56.857423   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.857740   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.857766   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.857920   14037 main.go:141] libmachine: Docker is up and running!
	I0725 17:29:56.857936   14037 main.go:141] libmachine: Reticulating splines...
	I0725 17:29:56.857945   14037 client.go:171] duration metric: took 21.292406546s to LocalClient.Create
	I0725 17:29:56.857971   14037 start.go:167] duration metric: took 21.292528939s to libmachine.API.Create "addons-377932"
	I0725 17:29:56.857984   14037 start.go:293] postStartSetup for "addons-377932" (driver="kvm2")
	I0725 17:29:56.857997   14037 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:29:56.858017   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:56.858246   14037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:29:56.858276   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.860817   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.861152   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.861175   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.861293   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:56.861497   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.861661   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:56.861799   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:29:56.941980   14037 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:29:56.945875   14037 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 17:29:56.945894   14037 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 17:29:56.945965   14037 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 17:29:56.945987   14037 start.go:296] duration metric: took 87.998176ms for postStartSetup
	I0725 17:29:56.946017   14037 main.go:141] libmachine: (addons-377932) Calling .GetConfigRaw
	I0725 17:29:56.946540   14037 main.go:141] libmachine: (addons-377932) Calling .GetIP
	I0725 17:29:56.949001   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.949409   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.949439   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.949767   14037 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/config.json ...
	I0725 17:29:56.949973   14037 start.go:128] duration metric: took 21.401723832s to createHost
	I0725 17:29:56.949996   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:56.952743   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.953087   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:56.953108   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:56.953233   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:56.953417   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.953561   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:56.953721   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:56.953880   14037 main.go:141] libmachine: Using SSH client type: native
	I0725 17:29:56.954031   14037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0725 17:29:56.954040   14037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 17:29:57.060703   14037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721928597.033917637
	
	I0725 17:29:57.060725   14037 fix.go:216] guest clock: 1721928597.033917637
	I0725 17:29:57.060733   14037 fix.go:229] Guest: 2024-07-25 17:29:57.033917637 +0000 UTC Remote: 2024-07-25 17:29:56.949984849 +0000 UTC m=+21.498950979 (delta=83.932788ms)
	I0725 17:29:57.060777   14037 fix.go:200] guest clock delta is within tolerance: 83.932788ms
	I0725 17:29:57.060783   14037 start.go:83] releasing machines lock for "addons-377932", held for 21.512599051s
	I0725 17:29:57.060804   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:57.061049   14037 main.go:141] libmachine: (addons-377932) Calling .GetIP
	I0725 17:29:57.063861   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.064183   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:57.064202   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.064391   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:57.064871   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:57.065115   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:29:57.065208   14037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 17:29:57.065246   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:57.065324   14037 ssh_runner.go:195] Run: cat /version.json
	I0725 17:29:57.065340   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:29:57.067884   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.067980   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.068317   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:57.068361   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.068383   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:57.068395   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:57.068557   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:57.068659   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:29:57.068821   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:57.068840   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:29:57.068956   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:57.068969   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:29:57.069075   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:29:57.069080   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:29:57.192662   14037 ssh_runner.go:195] Run: systemctl --version
	I0725 17:29:57.198592   14037 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 17:29:57.347612   14037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 17:29:57.353347   14037 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 17:29:57.353431   14037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 17:29:57.367887   14037 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 17:29:57.367911   14037 start.go:495] detecting cgroup driver to use...
	I0725 17:29:57.367981   14037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 17:29:57.382431   14037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 17:29:57.395385   14037 docker.go:217] disabling cri-docker service (if available) ...
	I0725 17:29:57.395448   14037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 17:29:57.408459   14037 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 17:29:57.422925   14037 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 17:29:57.549552   14037 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 17:29:57.699999   14037 docker.go:233] disabling docker service ...
	I0725 17:29:57.700069   14037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 17:29:57.713255   14037 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 17:29:57.725340   14037 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 17:29:57.839839   14037 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 17:29:57.954689   14037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 17:29:57.972570   14037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:29:57.989913   14037 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 17:29:57.989980   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:57.999476   14037 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 17:29:57.999541   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.009406   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.020292   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.029892   14037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 17:29:58.039611   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.048918   14037 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.063959   14037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:29:58.073462   14037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 17:29:58.082074   14037 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 17:29:58.082125   14037 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 17:29:58.093842   14037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 17:29:58.102684   14037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:29:58.209093   14037 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 17:29:58.334896   14037 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 17:29:58.334984   14037 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 17:29:58.339238   14037 start.go:563] Will wait 60s for crictl version
	I0725 17:29:58.339301   14037 ssh_runner.go:195] Run: which crictl
	I0725 17:29:58.342595   14037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 17:29:58.378421   14037 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 17:29:58.378516   14037 ssh_runner.go:195] Run: crio --version
	I0725 17:29:58.405153   14037 ssh_runner.go:195] Run: crio --version
	I0725 17:29:58.434375   14037 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 17:29:58.435799   14037 main.go:141] libmachine: (addons-377932) Calling .GetIP
	I0725 17:29:58.438439   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:58.438772   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:29:58.438797   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:29:58.439073   14037 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 17:29:58.442923   14037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:29:58.454764   14037 kubeadm.go:883] updating cluster {Name:addons-377932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 17:29:58.454865   14037 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:29:58.454907   14037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:29:58.484834   14037 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 17:29:58.484897   14037 ssh_runner.go:195] Run: which lz4
	I0725 17:29:58.488525   14037 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 17:29:58.492306   14037 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 17:29:58.492352   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 17:29:59.619950   14037 crio.go:462] duration metric: took 1.131449747s to copy over tarball
	I0725 17:29:59.620025   14037 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 17:30:01.853326   14037 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.233273989s)
	I0725 17:30:01.853361   14037 crio.go:469] duration metric: took 2.233384178s to extract the tarball
	I0725 17:30:01.853368   14037 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 17:30:01.890983   14037 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:30:01.934697   14037 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 17:30:01.934720   14037 cache_images.go:84] Images are preloaded, skipping loading
	I0725 17:30:01.934729   14037 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.30.3 crio true true} ...
	I0725 17:30:01.934856   14037 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-377932 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 17:30:01.934934   14037 ssh_runner.go:195] Run: crio config
	I0725 17:30:01.985104   14037 cni.go:84] Creating CNI manager for ""
	I0725 17:30:01.985125   14037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 17:30:01.985137   14037 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 17:30:01.985157   14037 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-377932 NodeName:addons-377932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 17:30:01.985284   14037 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-377932"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 17:30:01.985341   14037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 17:30:01.995435   14037 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 17:30:01.995507   14037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 17:30:02.004650   14037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0725 17:30:02.020294   14037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:30:02.035791   14037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0725 17:30:02.050739   14037 ssh_runner.go:195] Run: grep 192.168.39.150	control-plane.minikube.internal$ /etc/hosts
	I0725 17:30:02.054487   14037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:30:02.065514   14037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:30:02.177851   14037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:30:02.193963   14037 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932 for IP: 192.168.39.150
	I0725 17:30:02.193990   14037 certs.go:194] generating shared ca certs ...
	I0725 17:30:02.194009   14037 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.194181   14037 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 17:30:02.356378   14037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt ...
	I0725 17:30:02.356409   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt: {Name:mk4dbfb6c929c0f89f5410dfe7f5a6ded2c7abbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.356632   14037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key ...
	I0725 17:30:02.356650   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key: {Name:mk4e33c2ec36f72504eaacd6c4453cec5f6a0fdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.356770   14037 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 17:30:02.591810   14037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt ...
	I0725 17:30:02.591844   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt: {Name:mk9d6644fd5c0d5e0ce0a831a082f277ae778296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.592032   14037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key ...
	I0725 17:30:02.592044   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key: {Name:mk66d8bc8e5de2f635608853f8a33928fea3e40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.592116   14037 certs.go:256] generating profile certs ...
	I0725 17:30:02.592169   14037 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.key
	I0725 17:30:02.592184   14037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt with IP's: []
	I0725 17:30:02.913501   14037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt ...
	I0725 17:30:02.913533   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: {Name:mkf02e505348e429a8c13e822a6b4978fc12c96e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.913709   14037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.key ...
	I0725 17:30:02.913721   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.key: {Name:mkcc39f27e83991ea55ff0cd42be2c158789e3f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:02.913797   14037 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key.312dc7bb
	I0725 17:30:02.913817   14037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt.312dc7bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150]
	I0725 17:30:03.101135   14037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt.312dc7bb ...
	I0725 17:30:03.101164   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt.312dc7bb: {Name:mka12d82691d7fddeaa9f79458083ad330ae80e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:03.101323   14037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key.312dc7bb ...
	I0725 17:30:03.101336   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key.312dc7bb: {Name:mk4c01ac180141516918736912bcc92e918f5599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:03.101402   14037 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt.312dc7bb -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt
	I0725 17:30:03.101476   14037 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key.312dc7bb -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key
	I0725 17:30:03.101521   14037 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.key
	I0725 17:30:03.101538   14037 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.crt with IP's: []
	I0725 17:30:03.186946   14037 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.crt ...
	I0725 17:30:03.186974   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.crt: {Name:mk94acbb828e3670ee4984e84cb9a6002a81e64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:03.187175   14037 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.key ...
	I0725 17:30:03.187190   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.key: {Name:mkf228049e0f765d2437faa2c80c2a597524df60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:03.187371   14037 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:30:03.187403   14037 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 17:30:03.187429   14037 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:30:03.187451   14037 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 17:30:03.188021   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:30:03.214966   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 17:30:03.242170   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:30:03.269132   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 17:30:03.295269   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0725 17:30:03.318933   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 17:30:03.341333   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:30:03.363394   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:30:03.385295   14037 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:30:03.407436   14037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 17:30:03.423401   14037 ssh_runner.go:195] Run: openssl version
	I0725 17:30:03.429810   14037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:30:03.440419   14037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:30:03.444571   14037 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:30:03.444626   14037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:30:03.450185   14037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:30:03.460783   14037 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 17:30:03.465066   14037 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 17:30:03.465126   14037 kubeadm.go:392] StartCluster: {Name:addons-377932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-377932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:30:03.465238   14037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 17:30:03.465293   14037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 17:30:03.498504   14037 cri.go:89] found id: ""
	I0725 17:30:03.498576   14037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 17:30:03.508358   14037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:30:03.517827   14037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:30:03.526652   14037 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 17:30:03.526676   14037 kubeadm.go:157] found existing configuration files:
	
	I0725 17:30:03.526722   14037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 17:30:03.535418   14037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 17:30:03.535491   14037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 17:30:03.544526   14037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 17:30:03.552818   14037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 17:30:03.552873   14037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 17:30:03.561886   14037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 17:30:03.570410   14037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 17:30:03.570460   14037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 17:30:03.579110   14037 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 17:30:03.587429   14037 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 17:30:03.587491   14037 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 17:30:03.596300   14037 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 17:30:03.776546   14037 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 17:30:13.711710   14037 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 17:30:13.711780   14037 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 17:30:13.711899   14037 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 17:30:13.712001   14037 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 17:30:13.712088   14037 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 17:30:13.712183   14037 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 17:30:13.714417   14037 out.go:204]   - Generating certificates and keys ...
	I0725 17:30:13.714520   14037 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 17:30:13.714609   14037 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 17:30:13.714701   14037 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 17:30:13.714760   14037 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 17:30:13.714808   14037 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 17:30:13.714851   14037 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 17:30:13.714903   14037 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 17:30:13.715068   14037 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-377932 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0725 17:30:13.715143   14037 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 17:30:13.715299   14037 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-377932 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0725 17:30:13.715458   14037 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 17:30:13.715556   14037 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 17:30:13.715618   14037 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 17:30:13.715700   14037 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 17:30:13.715774   14037 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 17:30:13.715828   14037 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 17:30:13.715881   14037 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 17:30:13.715978   14037 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 17:30:13.716042   14037 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 17:30:13.716118   14037 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 17:30:13.716173   14037 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 17:30:13.717526   14037 out.go:204]   - Booting up control plane ...
	I0725 17:30:13.717614   14037 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 17:30:13.717675   14037 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 17:30:13.717729   14037 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 17:30:13.717815   14037 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 17:30:13.717884   14037 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 17:30:13.717934   14037 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 17:30:13.718121   14037 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 17:30:13.718220   14037 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 17:30:13.718274   14037 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.28175ms
	I0725 17:30:13.718350   14037 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 17:30:13.718410   14037 kubeadm.go:310] [api-check] The API server is healthy after 5.501270616s
	I0725 17:30:13.718501   14037 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 17:30:13.718601   14037 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 17:30:13.718656   14037 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 17:30:13.718802   14037 kubeadm.go:310] [mark-control-plane] Marking the node addons-377932 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 17:30:13.718861   14037 kubeadm.go:310] [bootstrap-token] Using token: kzvuql.b3y2zkhnhyb7z65l
	I0725 17:30:13.720239   14037 out.go:204]   - Configuring RBAC rules ...
	I0725 17:30:13.720387   14037 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 17:30:13.720463   14037 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 17:30:13.720578   14037 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 17:30:13.720674   14037 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 17:30:13.720766   14037 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 17:30:13.720831   14037 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 17:30:13.720942   14037 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 17:30:13.720977   14037 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 17:30:13.721014   14037 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 17:30:13.721019   14037 kubeadm.go:310] 
	I0725 17:30:13.721063   14037 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 17:30:13.721069   14037 kubeadm.go:310] 
	I0725 17:30:13.721128   14037 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 17:30:13.721136   14037 kubeadm.go:310] 
	I0725 17:30:13.721161   14037 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 17:30:13.721252   14037 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 17:30:13.721412   14037 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 17:30:13.721443   14037 kubeadm.go:310] 
	I0725 17:30:13.721530   14037 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 17:30:13.721557   14037 kubeadm.go:310] 
	I0725 17:30:13.721624   14037 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 17:30:13.721644   14037 kubeadm.go:310] 
	I0725 17:30:13.721727   14037 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 17:30:13.721799   14037 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 17:30:13.721866   14037 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 17:30:13.721873   14037 kubeadm.go:310] 
	I0725 17:30:13.721963   14037 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 17:30:13.722088   14037 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 17:30:13.722101   14037 kubeadm.go:310] 
	I0725 17:30:13.722208   14037 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kzvuql.b3y2zkhnhyb7z65l \
	I0725 17:30:13.722352   14037 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 \
	I0725 17:30:13.722377   14037 kubeadm.go:310] 	--control-plane 
	I0725 17:30:13.722393   14037 kubeadm.go:310] 
	I0725 17:30:13.722501   14037 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 17:30:13.722512   14037 kubeadm.go:310] 
	I0725 17:30:13.722618   14037 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kzvuql.b3y2zkhnhyb7z65l \
	I0725 17:30:13.722712   14037 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 
	I0725 17:30:13.722744   14037 cni.go:84] Creating CNI manager for ""
	I0725 17:30:13.722756   14037 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 17:30:13.724409   14037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 17:30:13.725689   14037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 17:30:13.736164   14037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 17:30:13.753135   14037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 17:30:13.753221   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:13.753251   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-377932 minikube.k8s.io/updated_at=2024_07_25T17_30_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=addons-377932 minikube.k8s.io/primary=true
	I0725 17:30:13.771636   14037 ops.go:34] apiserver oom_adj: -16
	I0725 17:30:13.932222   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:14.432846   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:14.932535   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:15.432979   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:15.933055   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:16.432473   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:16.932766   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:17.432505   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:17.933274   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:18.432934   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:18.933196   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:19.433077   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:19.932599   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:20.432972   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:20.932923   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:21.432286   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:21.933277   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:22.432425   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:22.932503   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:23.432733   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:23.932576   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:24.432996   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:24.932595   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:25.432397   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:25.932523   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:26.432818   14037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:30:26.508179   14037 kubeadm.go:1113] duration metric: took 12.755028778s to wait for elevateKubeSystemPrivileges
	I0725 17:30:26.508217   14037 kubeadm.go:394] duration metric: took 23.043092848s to StartCluster
	I0725 17:30:26.508239   14037 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:26.508376   14037 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:30:26.508749   14037 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:30:26.508938   14037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 17:30:26.508967   14037 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:30:26.509037   14037 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0725 17:30:26.509115   14037 addons.go:69] Setting yakd=true in profile "addons-377932"
	I0725 17:30:26.509119   14037 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-377932"
	I0725 17:30:26.509146   14037 addons.go:234] Setting addon yakd=true in "addons-377932"
	I0725 17:30:26.509173   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.509214   14037 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-377932"
	I0725 17:30:26.509221   14037 addons.go:69] Setting registry=true in profile "addons-377932"
	I0725 17:30:26.509206   14037 addons.go:69] Setting helm-tiller=true in profile "addons-377932"
	I0725 17:30:26.509226   14037 addons.go:69] Setting metrics-server=true in profile "addons-377932"
	I0725 17:30:26.509264   14037 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-377932"
	I0725 17:30:26.509267   14037 addons.go:69] Setting default-storageclass=true in profile "addons-377932"
	I0725 17:30:26.509271   14037 addons.go:69] Setting gcp-auth=true in profile "addons-377932"
	I0725 17:30:26.509288   14037 mustload.go:65] Loading cluster: addons-377932
	I0725 17:30:26.509289   14037 addons.go:234] Setting addon helm-tiller=true in "addons-377932"
	I0725 17:30:26.509301   14037 addons.go:69] Setting ingress=true in profile "addons-377932"
	I0725 17:30:26.509314   14037 addons.go:69] Setting volcano=true in profile "addons-377932"
	I0725 17:30:26.509333   14037 addons.go:234] Setting addon ingress=true in "addons-377932"
	I0725 17:30:26.509804   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.509327   14037 addons.go:69] Setting volumesnapshots=true in profile "addons-377932"
	I0725 17:30:26.509892   14037 addons.go:69] Setting inspektor-gadget=true in profile "addons-377932"
	I0725 17:30:26.509256   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.509932   14037 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-377932"
	I0725 17:30:26.509173   14037 config.go:182] Loaded profile config "addons-377932": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:30:26.509141   14037 addons.go:69] Setting cloud-spanner=true in profile "addons-377932"
	I0725 17:30:26.509302   14037 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-377932"
	I0725 17:30:26.510015   14037 addons.go:234] Setting addon cloud-spanner=true in "addons-377932"
	I0725 17:30:26.510043   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.510053   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.510086   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.510468   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.510546   14037 config.go:182] Loaded profile config "addons-377932": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:30:26.510566   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.510596   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.510882   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.510903   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.510933   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.510954   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.510948   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.510988   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.511012   14037 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-377932"
	I0725 17:30:26.509315   14037 addons.go:69] Setting ingress-dns=true in profile "addons-377932"
	I0725 17:30:26.511456   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.511563   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.510501   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.511467   14037 addons.go:234] Setting addon ingress-dns=true in "addons-377932"
	I0725 17:30:26.509342   14037 addons.go:234] Setting addon volcano=true in "addons-377932"
	I0725 17:30:26.512027   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.512044   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.512083   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.510548   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.509292   14037 addons.go:234] Setting addon metrics-server=true in "addons-377932"
	I0725 17:30:26.509302   14037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-377932"
	I0725 17:30:26.509919   14037 addons.go:234] Setting addon inspektor-gadget=true in "addons-377932"
	I0725 17:30:26.512379   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.509919   14037 addons.go:234] Setting addon volumesnapshots=true in "addons-377932"
	I0725 17:30:26.512564   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.510501   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.512745   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.512794   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.512747   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.512877   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.509257   14037 addons.go:234] Setting addon registry=true in "addons-377932"
	I0725 17:30:26.509189   14037 addons.go:69] Setting storage-provisioner=true in profile "addons-377932"
	I0725 17:30:26.512891   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.512930   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.512990   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.513039   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.513064   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.513075   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.513090   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.513086   14037 addons.go:234] Setting addon storage-provisioner=true in "addons-377932"
	I0725 17:30:26.513340   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.513848   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.513884   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.530410   14037 out.go:177] * Verifying Kubernetes components...
	I0725 17:30:26.530449   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.531156   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.531413   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.531451   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.531597   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.531615   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.532432   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I0725 17:30:26.532715   14037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:30:26.534701   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0725 17:30:26.534712   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.534797   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0725 17:30:26.540880   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0725 17:30:26.541339   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.541369   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.541462   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.542056   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.542239   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.542259   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.542947   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.542999   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.543491   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.543552   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I0725 17:30:26.543713   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.543889   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.544367   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.544387   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.544712   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.548030   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43483
	I0725 17:30:26.550617   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0725 17:30:26.551058   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.553261   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I0725 17:30:26.553836   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44543
	I0725 17:30:26.558597   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0725 17:30:26.559513   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.565376   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0725 17:30:26.566084   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.566120   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.566426   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.566665   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.566679   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.566725   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.566747   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.566790   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.566820   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.566928   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.566937   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.568064   14037 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-377932"
	I0725 17:30:26.568115   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.568515   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.568545   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.569360   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.569439   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.569457   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.569519   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.569532   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.569589   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.569602   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.569652   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.569663   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.569706   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.569715   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.569716   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.570161   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.570226   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.570226   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.570261   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.570292   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.570527   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.570568   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.570730   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.570756   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.570834   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.570869   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.571512   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.571597   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.571664   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I0725 17:30:26.572127   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.572167   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.572764   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.572801   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.579403   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.579453   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.579487   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42665
	I0725 17:30:26.579601   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.579827   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.580037   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.580051   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.580163   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.580173   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.580514   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.580527   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.580738   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.581186   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.581208   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.581791   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.581807   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.581996   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.582355   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.582387   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.582417   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.582636   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.583052   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.583072   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.589530   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39737
	I0725 17:30:26.590052   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.590614   14037 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0725 17:30:26.590723   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.590746   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.591104   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.591256   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.593066   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.593414   14037 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0725 17:30:26.595378   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0725 17:30:26.596468   14037 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0725 17:30:26.597772   14037 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0725 17:30:26.597795   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0725 17:30:26.597815   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.597885   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0725 17:30:26.599052   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0725 17:30:26.599763   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.600067   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0725 17:30:26.600503   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.600519   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.601077   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.601194   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0725 17:30:26.601334   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.601613   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.601701   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.601716   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.602029   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.602064   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.602241   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.602256   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0725 17:30:26.602263   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.602246   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.602469   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.602574   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.602665   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.602719   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.602963   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.604251   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0725 17:30:26.604663   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.605056   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:26.605072   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:26.605217   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:26.605229   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:26.605238   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:26.605247   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:26.605248   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:26.605390   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:26.605404   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	W0725 17:30:26.605475   14037 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0725 17:30:26.606242   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0725 17:30:26.607241   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0725 17:30:26.608265   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0725 17:30:26.609200   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0725 17:30:26.609214   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0725 17:30:26.609234   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.609498   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38407
	I0725 17:30:26.609521   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0725 17:30:26.610014   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.610124   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.610654   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.610672   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.611055   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.611280   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.612285   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.612302   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.612393   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.612412   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.612431   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.612892   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.613119   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.613177   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.613401   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.613632   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.613902   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.614397   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I0725 17:30:26.614560   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.614860   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.615495   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.615513   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.616164   14037 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0725 17:30:26.616621   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.617258   14037 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0725 17:30:26.617272   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0725 17:30:26.617285   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.617610   14037 addons.go:234] Setting addon default-storageclass=true in "addons-377932"
	I0725 17:30:26.617656   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:26.618031   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.618055   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.618344   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.620100   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I0725 17:30:26.620575   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.620673   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.620755   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.621015   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.621022   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.621038   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.621506   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.621524   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.621760   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.621917   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.621933   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.622214   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.622232   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.622939   14037 out.go:177]   - Using image docker.io/registry:2.8.3
	I0725 17:30:26.624062   14037 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0725 17:30:26.625203   14037 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0725 17:30:26.625222   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0725 17:30:26.625238   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.626513   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42675
	I0725 17:30:26.627340   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.628401   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.628425   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.628797   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.628892   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.629239   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.629326   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.629637   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.629663   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.629872   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.630042   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.630211   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.630353   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.632272   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0725 17:30:26.632843   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.633333   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.633349   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.633723   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.634279   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.634314   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.636459   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40595
	I0725 17:30:26.636857   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.637702   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.637724   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.638019   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.638523   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.638556   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.645196   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41245
	I0725 17:30:26.645810   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.646332   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.646351   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.646658   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.646825   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.648589   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.650609   14037 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0725 17:30:26.651568   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35351
	I0725 17:30:26.651998   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.652079   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0725 17:30:26.652089   14037 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0725 17:30:26.652107   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.652632   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.652648   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.653210   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.653401   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.655783   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.656034   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38511
	I0725 17:30:26.656160   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.656533   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.656754   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.656783   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.656977   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.657184   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.657197   14037 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0725 17:30:26.657204   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.657215   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.657421   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.657532   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.657582   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.658233   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:26.658273   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:26.658521   14037 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 17:30:26.658543   14037 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 17:30:26.658560   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.661187   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0725 17:30:26.662231   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.662522   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.662629   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.662644   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.662677   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.663275   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.663293   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.663354   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.663742   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.663811   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.664054   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.664103   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42275
	I0725 17:30:26.664109   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.664317   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
	I0725 17:30:26.664463   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.665617   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.665967   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.666653   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.666670   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.667075   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.667368   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.667888   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0725 17:30:26.667911   14037 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0725 17:30:26.668359   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.668799   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.668822   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.669242   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.669290   14037 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0725 17:30:26.669308   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0725 17:30:26.669328   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.669294   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.669442   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.670265   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.670286   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.670642   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.670802   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.671021   14037 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0725 17:30:26.672215   14037 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0725 17:30:26.672231   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0725 17:30:26.672246   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.672245   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.673555   14037 out.go:177]   - Using image docker.io/busybox:stable
	I0725 17:30:26.674819   14037 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0725 17:30:26.675733   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.675800   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32979
	I0725 17:30:26.675945   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0725 17:30:26.676170   14037 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0725 17:30:26.676188   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0725 17:30:26.676204   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.676240   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.676338   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.676658   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.676677   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.676760   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.676983   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36401
	I0725 17:30:26.677381   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.677524   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.677604   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.677736   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.677749   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.678332   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.678589   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.678972   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.679102   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.679347   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.679393   14037 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 17:30:26.679404   14037 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 17:30:26.679418   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.679477   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.679493   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.679600   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.679621   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.679766   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.680105   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.680252   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.680400   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.680488   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.680512   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.680911   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.680961   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.680980   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.681044   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.681208   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.681217   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.681283   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.681323   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.681369   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.681593   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.681764   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.682166   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.682315   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.682549   14037 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0725 17:30:26.682678   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.683084   14037 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0725 17:30:26.683080   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.683526   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.683296   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.683751   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.683906   14037 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0725 17:30:26.683919   14037 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0725 17:30:26.683928   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.683932   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.683963   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.684136   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.684668   14037 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0725 17:30:26.684686   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0725 17:30:26.684701   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.685482   14037 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0725 17:30:26.686582   14037 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0725 17:30:26.686597   14037 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0725 17:30:26.686615   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.687408   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.688142   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.688161   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.688211   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.688385   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.688676   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.688674   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.688729   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.688834   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.688984   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.688985   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.689138   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.689254   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.689384   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.689493   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.689814   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.689836   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.690029   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.690188   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.690321   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.690462   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	W0725 17:30:26.693129   14037 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47776->192.168.39.150:22: read: connection reset by peer
	I0725 17:30:26.693157   14037 retry.go:31] will retry after 359.642328ms: ssh: handshake failed: read tcp 192.168.39.1:47776->192.168.39.150:22: read: connection reset by peer
	W0725 17:30:26.693221   14037 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47780->192.168.39.150:22: read: connection reset by peer
	I0725 17:30:26.693234   14037 retry.go:31] will retry after 239.250865ms: ssh: handshake failed: read tcp 192.168.39.1:47780->192.168.39.150:22: read: connection reset by peer
	I0725 17:30:26.695154   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39045
	I0725 17:30:26.695511   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:26.696000   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:26.696018   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:26.696297   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:26.696542   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:26.697773   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:26.699516   14037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 17:30:26.700751   14037 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:30:26.700766   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 17:30:26.700783   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:26.703483   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.703880   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:26.703899   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:26.704053   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:26.704223   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:26.704372   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:26.704490   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:26.856420   14037 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:30:26.856853   14037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 17:30:26.942739   14037 node_ready.go:35] waiting up to 6m0s for node "addons-377932" to be "Ready" ...
	I0725 17:30:26.950391   14037 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0725 17:30:26.950418   14037 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0725 17:30:26.991959   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 17:30:26.994415   14037 node_ready.go:49] node "addons-377932" has status "Ready":"True"
	I0725 17:30:26.994435   14037 node_ready.go:38] duration metric: took 51.673222ms for node "addons-377932" to be "Ready" ...
	I0725 17:30:26.994445   14037 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:30:27.024358   14037 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.027758   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0725 17:30:27.027777   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0725 17:30:27.059861   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0725 17:30:27.073551   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0725 17:30:27.081510   14037 pod_ready.go:92] pod "etcd-addons-377932" in "kube-system" namespace has status "Ready":"True"
	I0725 17:30:27.081529   14037 pod_ready.go:81] duration metric: took 57.142659ms for pod "etcd-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.081538   14037 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.082937   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:30:27.094301   14037 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0725 17:30:27.094330   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0725 17:30:27.109948   14037 pod_ready.go:92] pod "kube-apiserver-addons-377932" in "kube-system" namespace has status "Ready":"True"
	I0725 17:30:27.109965   14037 pod_ready.go:81] duration metric: took 28.42115ms for pod "kube-apiserver-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.109975   14037 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.127627   14037 pod_ready.go:92] pod "kube-controller-manager-addons-377932" in "kube-system" namespace has status "Ready":"True"
	I0725 17:30:27.127647   14037 pod_ready.go:81] duration metric: took 17.665924ms for pod "kube-controller-manager-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.127656   14037 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lvfsq" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:27.218026   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0725 17:30:27.223844   14037 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 17:30:27.223862   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0725 17:30:27.277644   14037 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0725 17:30:27.277666   14037 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0725 17:30:27.280873   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0725 17:30:27.290738   14037 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0725 17:30:27.290763   14037 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0725 17:30:27.299376   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0725 17:30:27.321019   14037 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0725 17:30:27.321046   14037 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0725 17:30:27.330013   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0725 17:30:27.330032   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0725 17:30:27.380008   14037 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0725 17:30:27.380029   14037 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0725 17:30:27.398574   14037 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0725 17:30:27.398600   14037 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0725 17:30:27.457196   14037 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0725 17:30:27.457223   14037 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0725 17:30:27.458617   14037 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 17:30:27.458639   14037 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 17:30:27.467756   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0725 17:30:27.467777   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0725 17:30:27.472690   14037 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0725 17:30:27.472717   14037 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0725 17:30:27.521353   14037 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0725 17:30:27.521375   14037 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0725 17:30:27.568449   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0725 17:30:27.585481   14037 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0725 17:30:27.585508   14037 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0725 17:30:27.587271   14037 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0725 17:30:27.587292   14037 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0725 17:30:27.593926   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0725 17:30:27.629117   14037 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:30:27.629136   14037 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 17:30:27.656673   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0725 17:30:27.656699   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0725 17:30:27.700184   14037 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0725 17:30:27.700207   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0725 17:30:27.736208   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0725 17:30:27.736235   14037 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0725 17:30:27.751771   14037 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0725 17:30:27.751797   14037 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0725 17:30:27.818829   14037 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0725 17:30:27.818852   14037 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0725 17:30:27.879044   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 17:30:27.905665   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0725 17:30:27.905690   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0725 17:30:27.967943   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0725 17:30:27.983486   14037 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0725 17:30:27.983509   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0725 17:30:28.042366   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0725 17:30:28.042397   14037 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0725 17:30:28.066816   14037 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0725 17:30:28.066851   14037 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0725 17:30:28.158008   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0725 17:30:28.209655   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0725 17:30:28.209680   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0725 17:30:28.265735   14037 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0725 17:30:28.265761   14037 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0725 17:30:28.496052   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0725 17:30:28.496078   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0725 17:30:28.542715   14037 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0725 17:30:28.542740   14037 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0725 17:30:28.681084   14037 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.824193331s)
	I0725 17:30:28.681114   14037 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0725 17:30:28.681119   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.68913118s)
	I0725 17:30:28.681163   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:28.681179   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:28.681202   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.62131223s)
	I0725 17:30:28.681258   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:28.681276   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:28.681443   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:28.681456   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:28.681465   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:28.681472   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:28.681572   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:28.681611   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:28.681628   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:28.681645   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:28.681667   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:28.681733   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:28.681744   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:28.682202   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:28.682212   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:28.682221   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:28.708827   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:28.708849   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:28.709134   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:28.709174   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:28.709183   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:28.728193   14037 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0725 17:30:28.728218   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0725 17:30:28.918382   14037 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0725 17:30:28.918408   14037 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0725 17:30:28.935308   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0725 17:30:29.132975   14037 pod_ready.go:102] pod "kube-proxy-lvfsq" in "kube-system" namespace has status "Ready":"False"
	I0725 17:30:29.137578   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0725 17:30:29.185384   14037 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-377932" context rescaled to 1 replicas
	I0725 17:30:31.191287   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.11769951s)
	I0725 17:30:31.191359   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:31.191372   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:31.191710   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:31.191728   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:31.191744   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:31.191759   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:31.191772   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:31.192087   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:31.192103   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:31.203430   14037 pod_ready.go:102] pod "kube-proxy-lvfsq" in "kube-system" namespace has status "Ready":"False"
	I0725 17:30:31.205948   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.122986006s)
	I0725 17:30:31.205986   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:31.205999   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:31.206224   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:31.206238   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:31.206248   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:31.206256   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:31.206541   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:31.206561   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:31.320171   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:31.320196   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:31.320475   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:31.320493   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:31.779423   14037 pod_ready.go:92] pod "kube-proxy-lvfsq" in "kube-system" namespace has status "Ready":"True"
	I0725 17:30:31.779444   14037 pod_ready.go:81] duration metric: took 4.651781743s for pod "kube-proxy-lvfsq" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:31.779453   14037 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:31.881899   14037 pod_ready.go:92] pod "kube-scheduler-addons-377932" in "kube-system" namespace has status "Ready":"True"
	I0725 17:30:31.881925   14037 pod_ready.go:81] duration metric: took 102.463485ms for pod "kube-scheduler-addons-377932" in "kube-system" namespace to be "Ready" ...
	I0725 17:30:31.881937   14037 pod_ready.go:38] duration metric: took 4.887481521s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:30:31.881955   14037 api_server.go:52] waiting for apiserver process to appear ...
	I0725 17:30:31.882010   14037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:30:33.678748   14037 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0725 17:30:33.678785   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:33.682101   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:33.682513   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:33.682539   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:33.682761   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:33.683112   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:33.683325   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:33.683500   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:33.849914   14037 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0725 17:30:33.899631   14037 addons.go:234] Setting addon gcp-auth=true in "addons-377932"
	I0725 17:30:33.899687   14037 host.go:66] Checking if "addons-377932" exists ...
	I0725 17:30:33.899995   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:33.900023   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:33.915048   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38837
	I0725 17:30:33.915478   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:33.915949   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:33.915967   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:33.916283   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:33.916955   14037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:30:33.917027   14037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:30:33.931543   14037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41639
	I0725 17:30:33.931997   14037 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:30:33.932485   14037 main.go:141] libmachine: Using API Version  1
	I0725 17:30:33.932508   14037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:30:33.932821   14037 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:30:33.932978   14037 main.go:141] libmachine: (addons-377932) Calling .GetState
	I0725 17:30:33.934511   14037 main.go:141] libmachine: (addons-377932) Calling .DriverName
	I0725 17:30:33.934736   14037 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0725 17:30:33.934758   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHHostname
	I0725 17:30:33.937508   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:33.937905   14037 main.go:141] libmachine: (addons-377932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:a8:62", ip: ""} in network mk-addons-377932: {Iface:virbr1 ExpiryTime:2024-07-25 18:29:50 +0000 UTC Type:0 Mac:52:54:00:b4:a8:62 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-377932 Clientid:01:52:54:00:b4:a8:62}
	I0725 17:30:33.937930   14037 main.go:141] libmachine: (addons-377932) DBG | domain addons-377932 has defined IP address 192.168.39.150 and MAC address 52:54:00:b4:a8:62 in network mk-addons-377932
	I0725 17:30:33.938071   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHPort
	I0725 17:30:33.938222   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHKeyPath
	I0725 17:30:33.938363   14037 main.go:141] libmachine: (addons-377932) Calling .GetSSHUsername
	I0725 17:30:33.938550   14037 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/addons-377932/id_rsa Username:docker}
	I0725 17:30:34.798399   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.580329833s)
	I0725 17:30:34.798450   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.798454   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.517552437s)
	I0725 17:30:34.798494   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.798516   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.798463   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.798499   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.499101707s)
	I0725 17:30:34.798938   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.798954   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.798977   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.20503125s)
	I0725 17:30:34.798884   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.230390546s)
	I0725 17:30:34.799006   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.799015   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.799019   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.799038   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.799528   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.920445673s)
	I0725 17:30:34.799565   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.799581   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.799916   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.831938722s)
	I0725 17:30:34.799939   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.799955   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.800133   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.642087812s)
	W0725 17:30:34.800166   14037 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0725 17:30:34.800186   14037 retry.go:31] will retry after 317.586915ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0725 17:30:34.800290   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.864940169s)
	I0725 17:30:34.800305   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.800342   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801588   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.801635   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801649   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801664   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801664   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801675   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.801685   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801692   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801699   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.801705   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801740   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801751   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801759   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801766   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.801773   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801819   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.801839   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801846   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801853   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.801865   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801896   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.801904   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.801955   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.801985   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.801993   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.802024   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.802033   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.802303   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.802343   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.802357   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.801685   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803047   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803066   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803102   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.803109   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.803118   14037 addons.go:475] Verifying addon ingress=true in "addons-377932"
	I0725 17:30:34.803154   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803175   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.803250   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.803260   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.803268   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.803275   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.803190   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.803177   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803204   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.803749   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.803764   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:34.803773   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:34.803796   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.803829   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.803835   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.803987   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.804032   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.804041   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.804052   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.804065   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.804383   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.804755   14037 out.go:177] * Verifying ingress addon...
	I0725 17:30:34.805513   14037 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-377932 service yakd-dashboard -n yakd-dashboard
	
	I0725 17:30:34.806791   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.806835   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.806851   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.806859   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:34.806859   14037 addons.go:475] Verifying addon registry=true in "addons-377932"
	I0725 17:30:34.806902   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:34.806910   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:34.806917   14037 addons.go:475] Verifying addon metrics-server=true in "addons-377932"
	I0725 17:30:34.807740   14037 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0725 17:30:34.809187   14037 out.go:177] * Verifying registry addon...
	I0725 17:30:34.811567   14037 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0725 17:30:34.833370   14037 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0725 17:30:34.833399   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:34.839153   14037 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0725 17:30:34.839173   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:35.118558   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0725 17:30:35.348369   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:35.348538   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:35.695836   14037 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.813802254s)
	I0725 17:30:35.695872   14037 api_server.go:72] duration metric: took 9.186876551s to wait for apiserver process to appear ...
	I0725 17:30:35.695881   14037 api_server.go:88] waiting for apiserver healthz status ...
	I0725 17:30:35.695896   14037 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.761146615s)
	I0725 17:30:35.695902   14037 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0725 17:30:35.695839   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.558218438s)
	I0725 17:30:35.696010   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:35.696033   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:35.696367   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:35.696464   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:35.696547   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:35.696561   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:35.696584   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:35.696798   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:35.696816   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:35.696828   14037 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-377932"
	I0725 17:30:35.697694   14037 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0725 17:30:35.698681   14037 out.go:177] * Verifying csi-hostpath-driver addon...
	I0725 17:30:35.700171   14037 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0725 17:30:35.700856   14037 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0725 17:30:35.701234   14037 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0725 17:30:35.701285   14037 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0725 17:30:35.708636   14037 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0725 17:30:35.709948   14037 api_server.go:141] control plane version: v1.30.3
	I0725 17:30:35.709967   14037 api_server.go:131] duration metric: took 14.080783ms to wait for apiserver health ...
	I0725 17:30:35.709976   14037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:30:35.763090   14037 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0725 17:30:35.763114   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:35.766982   14037 system_pods.go:59] 19 kube-system pods found
	I0725 17:30:35.767015   14037 system_pods.go:61] "coredns-7db6d8ff4d-88xvs" [7b1bde6a-0813-443b-9380-b00b7d28e60b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:30:35.767022   14037 system_pods.go:61] "coredns-7db6d8ff4d-d9w47" [bdce9c77-c60e-470b-bcf9-92bc0457b00c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:30:35.767032   14037 system_pods.go:61] "csi-hostpath-attacher-0" [1dc5f394-e7fe-42cc-837c-dcc2bc950f3c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0725 17:30:35.767036   14037 system_pods.go:61] "csi-hostpath-resizer-0" [5690ce6b-1620-4e7b-a4c2-ba55aa2719d5] Pending
	I0725 17:30:35.767045   14037 system_pods.go:61] "csi-hostpathplugin-sp25x" [fc9e8e5b-9eea-48b0-ab93-a41dd47ba51b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0725 17:30:35.767049   14037 system_pods.go:61] "etcd-addons-377932" [cb332b46-cc93-4dac-b792-7af6ecb19e19] Running
	I0725 17:30:35.767055   14037 system_pods.go:61] "kube-apiserver-addons-377932" [a89d3695-faba-4fd1-8d6e-44636c441dd3] Running
	I0725 17:30:35.767058   14037 system_pods.go:61] "kube-controller-manager-addons-377932" [25b60c94-0c25-420b-bab2-85da901959c6] Running
	I0725 17:30:35.767063   14037 system_pods.go:61] "kube-ingress-dns-minikube" [edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0725 17:30:35.767067   14037 system_pods.go:61] "kube-proxy-lvfsq" [064711fa-5c88-45bd-9b18-e748ebeae659] Running
	I0725 17:30:35.767070   14037 system_pods.go:61] "kube-scheduler-addons-377932" [791f79f6-b25a-46df-8b0e-ac3a1aeeb699] Running
	I0725 17:30:35.767075   14037 system_pods.go:61] "metrics-server-c59844bb4-nn7lw" [4b69ce7d-1c27-46dc-8f29-5bab086365eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:30:35.767082   14037 system_pods.go:61] "nvidia-device-plugin-daemonset-g4wdw" [33f0f28c-f9cb-4e40-8b85-364dac249c2b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0725 17:30:35.767098   14037 system_pods.go:61] "registry-656c9c8d9c-rkw7r" [c0a7b843-4a5e-4647-b7cb-7dd968ac91e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0725 17:30:35.767107   14037 system_pods.go:61] "registry-proxy-d8vdg" [83703257-9ba2-4749-b11e-965f7b8f4403] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0725 17:30:35.767114   14037 system_pods.go:61] "snapshot-controller-745499f584-4nzhc" [10ddb74f-e7a9-4a1a-a18c-a81520d43966] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 17:30:35.767121   14037 system_pods.go:61] "snapshot-controller-745499f584-vdmrk" [7268b907-7d32-4b96-a2fd-7866d0ef5bc3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 17:30:35.767124   14037 system_pods.go:61] "storage-provisioner" [9e60203d-a803-41b0-9d64-802cd79cf088] Running
	I0725 17:30:35.767129   14037 system_pods.go:61] "tiller-deploy-6677d64bcd-gzwvc" [404a7d43-869c-4137-b5a9-e4f4ce531f65] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0725 17:30:35.767136   14037 system_pods.go:74] duration metric: took 57.154189ms to wait for pod list to return data ...
	I0725 17:30:35.767146   14037 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:30:35.776100   14037 default_sa.go:45] found service account: "default"
	I0725 17:30:35.776129   14037 default_sa.go:55] duration metric: took 8.976645ms for default service account to be created ...
	I0725 17:30:35.776143   14037 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 17:30:35.793249   14037 system_pods.go:86] 19 kube-system pods found
	I0725 17:30:35.793276   14037 system_pods.go:89] "coredns-7db6d8ff4d-88xvs" [7b1bde6a-0813-443b-9380-b00b7d28e60b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:30:35.793285   14037 system_pods.go:89] "coredns-7db6d8ff4d-d9w47" [bdce9c77-c60e-470b-bcf9-92bc0457b00c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 17:30:35.793292   14037 system_pods.go:89] "csi-hostpath-attacher-0" [1dc5f394-e7fe-42cc-837c-dcc2bc950f3c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0725 17:30:35.793299   14037 system_pods.go:89] "csi-hostpath-resizer-0" [5690ce6b-1620-4e7b-a4c2-ba55aa2719d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0725 17:30:35.793305   14037 system_pods.go:89] "csi-hostpathplugin-sp25x" [fc9e8e5b-9eea-48b0-ab93-a41dd47ba51b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0725 17:30:35.793311   14037 system_pods.go:89] "etcd-addons-377932" [cb332b46-cc93-4dac-b792-7af6ecb19e19] Running
	I0725 17:30:35.793316   14037 system_pods.go:89] "kube-apiserver-addons-377932" [a89d3695-faba-4fd1-8d6e-44636c441dd3] Running
	I0725 17:30:35.793322   14037 system_pods.go:89] "kube-controller-manager-addons-377932" [25b60c94-0c25-420b-bab2-85da901959c6] Running
	I0725 17:30:35.793331   14037 system_pods.go:89] "kube-ingress-dns-minikube" [edcd00dd-8c58-4d99-b700-6ea0bf5ee4eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0725 17:30:35.793337   14037 system_pods.go:89] "kube-proxy-lvfsq" [064711fa-5c88-45bd-9b18-e748ebeae659] Running
	I0725 17:30:35.793344   14037 system_pods.go:89] "kube-scheduler-addons-377932" [791f79f6-b25a-46df-8b0e-ac3a1aeeb699] Running
	I0725 17:30:35.793353   14037 system_pods.go:89] "metrics-server-c59844bb4-nn7lw" [4b69ce7d-1c27-46dc-8f29-5bab086365eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 17:30:35.793366   14037 system_pods.go:89] "nvidia-device-plugin-daemonset-g4wdw" [33f0f28c-f9cb-4e40-8b85-364dac249c2b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0725 17:30:35.793372   14037 system_pods.go:89] "registry-656c9c8d9c-rkw7r" [c0a7b843-4a5e-4647-b7cb-7dd968ac91e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0725 17:30:35.793381   14037 system_pods.go:89] "registry-proxy-d8vdg" [83703257-9ba2-4749-b11e-965f7b8f4403] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0725 17:30:35.793415   14037 system_pods.go:89] "snapshot-controller-745499f584-4nzhc" [10ddb74f-e7a9-4a1a-a18c-a81520d43966] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 17:30:35.793430   14037 system_pods.go:89] "snapshot-controller-745499f584-vdmrk" [7268b907-7d32-4b96-a2fd-7866d0ef5bc3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 17:30:35.793436   14037 system_pods.go:89] "storage-provisioner" [9e60203d-a803-41b0-9d64-802cd79cf088] Running
	I0725 17:30:35.793447   14037 system_pods.go:89] "tiller-deploy-6677d64bcd-gzwvc" [404a7d43-869c-4137-b5a9-e4f4ce531f65] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0725 17:30:35.793458   14037 system_pods.go:126] duration metric: took 17.30932ms to wait for k8s-apps to be running ...
	I0725 17:30:35.793470   14037 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 17:30:35.793514   14037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:30:35.820677   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:35.822463   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:35.858719   14037 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0725 17:30:35.858746   14037 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0725 17:30:35.941419   14037 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0725 17:30:35.941448   14037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0725 17:30:35.996382   14037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0725 17:30:36.206578   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:36.312363   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:36.315590   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:36.707078   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:36.811961   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:36.815786   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:37.209856   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.091248009s)
	I0725 17:30:37.209912   14037 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.416371159s)
	I0725 17:30:37.209938   14037 system_svc.go:56] duration metric: took 1.416464158s WaitForService to wait for kubelet
	I0725 17:30:37.209952   14037 kubeadm.go:582] duration metric: took 10.700953135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:30:37.209977   14037 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.213559275s)
	I0725 17:30:37.209917   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:37.210002   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:37.210004   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:37.209983   14037 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:30:37.210017   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:37.210375   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:37.210398   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:37.210424   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:37.210441   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:37.210454   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:37.210464   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:37.210660   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:37.210713   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:37.210713   14037 main.go:141] libmachine: (addons-377932) DBG | Closing plugin on server side
	I0725 17:30:37.211803   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:37.211823   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:37.211838   14037 main.go:141] libmachine: Making call to close driver server
	I0725 17:30:37.211846   14037 main.go:141] libmachine: (addons-377932) Calling .Close
	I0725 17:30:37.212051   14037 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:30:37.212066   14037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:30:37.212412   14037 addons.go:475] Verifying addon gcp-auth=true in "addons-377932"
	I0725 17:30:37.214663   14037 out.go:177] * Verifying gcp-auth addon...
	I0725 17:30:37.216795   14037 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0725 17:30:37.243009   14037 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0725 17:30:37.243029   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:37.243796   14037 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:30:37.243826   14037 node_conditions.go:123] node cpu capacity is 2
	I0725 17:30:37.243841   14037 node_conditions.go:105] duration metric: took 33.82153ms to run NodePressure ...
	I0725 17:30:37.243856   14037 start.go:241] waiting for startup goroutines ...
	I0725 17:30:37.244242   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:37.320625   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:37.342538   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:37.707972   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:37.719826   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:37.812265   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:37.815699   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:38.206819   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:38.219960   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:38.313801   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:38.324559   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:38.736980   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:38.737728   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:38.811890   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:38.816631   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:39.207086   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:39.219616   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:39.312416   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:39.316569   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:39.708711   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:39.720666   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:39.812262   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:39.816415   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:40.205904   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:40.220486   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:40.312168   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:40.316140   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:40.706266   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:40.720254   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:40.811762   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:40.815166   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:41.206539   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:41.220492   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:41.312312   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:41.316452   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:41.707006   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:41.720428   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:41.811945   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:41.815559   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:42.206882   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:42.220009   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:42.311963   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:42.318067   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:42.706796   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:42.720838   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:42.812600   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:42.815106   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:43.207783   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:43.220029   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:43.311718   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:43.315243   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:43.706349   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:43.720490   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:43.812170   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:43.815716   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:44.207314   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:44.221629   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:44.312638   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:44.315795   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:44.707061   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:44.721185   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:44.811926   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:44.815586   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:45.206588   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:45.220553   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:45.312976   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:45.315941   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:45.706420   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:45.720459   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:45.811803   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:45.816123   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:46.206796   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:46.220632   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:46.312549   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:46.315722   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:46.707514   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:46.720843   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:46.813065   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:46.817743   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:47.206065   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:47.220219   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:47.311527   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:47.315270   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:47.708354   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:47.721037   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:47.811470   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:47.815720   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:48.206839   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:48.219835   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:48.313038   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:48.315599   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:48.706824   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:48.720200   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:48.811668   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:48.815131   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:49.205702   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:49.219986   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:49.311568   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:49.315456   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:49.706646   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:49.719697   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:49.812997   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:49.816380   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:50.206727   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:50.219967   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:50.311711   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:50.315196   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:50.707603   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:50.720103   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:50.812073   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:50.816435   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:51.206745   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:51.220252   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:51.312257   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:51.315943   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:51.710097   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:51.727851   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:51.812675   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:51.817856   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:52.207044   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:52.219897   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:52.312665   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:52.317452   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:52.706608   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:52.720869   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:52.812127   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:52.815353   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:53.207219   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:53.219876   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:53.312543   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:53.315133   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:53.706611   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:53.721096   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:53.811616   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:53.814869   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:54.206306   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:54.221532   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:54.311651   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:54.314961   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:54.706383   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:54.720590   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:54.812048   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:54.815460   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:55.206206   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:55.220169   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:55.311659   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:55.315730   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:55.707166   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:55.720215   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:55.812090   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:55.816701   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:56.206236   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:56.220234   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:56.316133   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:56.324665   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:56.707709   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:56.721098   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:56.811890   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:56.815732   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:57.205963   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:57.219903   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:57.312966   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:57.316415   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:57.706074   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:57.720280   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:57.812123   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:57.815466   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:58.206259   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:58.220454   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:58.312352   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:58.315684   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:58.707036   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:58.720180   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:58.811537   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:58.815672   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:59.206731   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:59.220823   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:59.312598   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:59.315024   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:30:59.706048   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:30:59.720100   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:30:59.811853   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:30:59.816290   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:00.206109   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:00.219876   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:00.312234   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:00.315637   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:00.708465   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:00.720954   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:00.812277   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:00.821816   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:01.209706   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:01.219697   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:01.311767   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:01.315728   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:01.707729   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:01.719810   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:01.814221   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:01.817577   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:02.208802   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:02.219441   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:02.317514   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:02.317607   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:02.707118   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:02.720106   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:02.812355   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:02.817262   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:03.207339   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:03.220378   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:03.314117   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:03.316585   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:03.705898   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:03.719740   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:03.817167   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:03.817841   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:04.205666   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:04.219649   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:04.312153   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:04.316167   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:04.706210   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:04.720503   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:04.811872   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:04.815916   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:05.206469   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:05.220028   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:05.312364   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:05.315052   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:05.706560   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:05.720571   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:05.811945   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:05.815383   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:06.206601   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:06.219380   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:06.313343   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:06.333180   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:06.909655   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:06.910319   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:06.910653   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:06.910699   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:07.208814   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:07.222577   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:07.312511   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:07.316019   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:07.706837   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:07.719918   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:07.816013   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:07.821195   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 17:31:08.206209   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:08.220278   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:08.311978   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:08.315666   14037 kapi.go:107] duration metric: took 33.504098676s to wait for kubernetes.io/minikube-addons=registry ...
	I0725 17:31:08.707992   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:08.720442   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:08.812057   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:09.206578   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:09.220678   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:09.312579   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:09.706135   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:09.720250   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:09.811671   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:10.206222   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:10.220368   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:10.312309   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:10.706495   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:10.720792   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:10.812696   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:11.206559   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:11.221003   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:11.312468   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:11.706687   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:11.720985   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:11.812565   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:12.205946   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:12.220095   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:12.311570   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:12.706316   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:12.720385   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:12.811956   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:13.206263   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:13.220199   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:13.311857   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:13.707929   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:13.719947   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:13.812799   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:14.206613   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:14.220817   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:14.312399   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:14.705678   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:14.719868   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:14.812670   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:15.206299   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:15.220566   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:15.312340   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:15.707639   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:15.720862   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:15.812488   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:16.206648   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:16.220122   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:16.312402   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:16.706683   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:16.719650   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:16.812236   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:17.206438   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:17.221035   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:17.311614   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:17.711149   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:17.721716   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:17.812357   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:18.219299   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:18.223925   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:18.312382   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:18.705817   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:18.720019   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:18.814450   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:19.206185   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:19.220756   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:19.312907   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:19.706939   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:19.720233   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:19.812031   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:20.206821   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:20.221042   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:20.313405   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:20.706376   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:20.720742   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:20.812972   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:21.207005   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:21.219833   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:21.312298   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:21.706807   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:21.720757   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:21.813040   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:22.206634   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:22.220276   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:22.316518   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:22.708504   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:22.720463   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:22.813557   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:23.206692   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:23.221240   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:23.312487   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:23.707352   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:23.724066   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:23.812221   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:24.207062   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:24.220733   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:24.555951   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:24.707216   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:24.720292   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:24.812178   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:25.206124   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:25.220463   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:25.311928   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:25.706572   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:25.720700   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:25.812549   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:26.206621   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:26.219445   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:26.312490   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:26.708892   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:26.720011   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:26.813072   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:27.211617   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:27.220766   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:27.312285   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:27.707498   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:27.722611   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:27.812315   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:28.209273   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:28.220999   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:28.311550   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:28.706646   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:28.720682   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:28.813651   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:29.210902   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:29.222191   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:29.312166   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:29.709202   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:29.720947   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:29.812735   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:30.206678   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:30.221433   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:30.313304   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:30.707637   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:30.723017   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:30.812213   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:31.211245   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:31.220189   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:31.311466   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:31.706704   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:31.719947   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:31.812571   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:32.206963   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:32.220110   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:32.703982   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:32.713008   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:32.725735   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:32.815590   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:33.206137   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:33.219984   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:33.311468   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:33.706835   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:33.719825   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:33.812236   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:34.205763   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:34.220144   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:34.311835   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:34.706688   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:34.720728   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:34.812211   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:35.206024   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:35.220754   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:35.312504   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:35.705962   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:35.719996   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:35.811808   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:36.208186   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:36.219634   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:36.312667   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:36.716138   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:36.721785   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:36.816769   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:37.209998   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:37.222740   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:37.313629   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:37.706281   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:37.720781   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:37.812025   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:38.206812   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:38.220713   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:38.313175   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:38.706925   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:38.720092   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:38.811795   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:39.211283   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:39.226714   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:39.312552   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:39.710203   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:39.719943   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:39.812751   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:40.206005   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:40.220610   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:40.312390   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:40.808487   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:40.813797   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:40.814468   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:41.206090   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:41.221008   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:41.311831   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:41.706728   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:41.720669   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:41.813801   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:42.206187   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:42.220718   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:42.320241   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:42.707527   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:42.721830   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:42.812750   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:43.205885   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:43.220056   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:43.313715   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:43.705756   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:43.719741   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:43.812189   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:44.206668   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:44.219812   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:44.312611   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:44.706400   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:44.720987   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:44.812704   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:45.434667   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:45.435264   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:45.438921   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:45.706255   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:45.720549   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:45.812336   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:46.207327   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:46.219901   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:46.312842   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:46.705809   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:46.720260   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:46.811896   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:47.207318   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:47.220408   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:47.312497   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:47.707177   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:47.720889   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:47.813116   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:48.206728   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:48.221159   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:48.311934   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:48.712716   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:48.720448   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:48.813339   14037 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 17:31:49.207279   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:49.220869   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:49.312715   14037 kapi.go:107] duration metric: took 1m14.504972311s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0725 17:31:49.706894   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:49.720091   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:50.207511   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:50.224157   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:50.705899   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:50.720070   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:51.207404   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:51.220681   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:51.708289   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:51.722737   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:52.206005   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:52.221183   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:52.706759   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:52.720169   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 17:31:53.206356   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:53.226919   14037 kapi.go:107] duration metric: took 1m16.010122961s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0725 17:31:53.228898   14037 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-377932 cluster.
	I0725 17:31:53.230496   14037 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0725 17:31:53.231987   14037 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0725 17:31:53.714096   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:54.206152   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:54.707947   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:55.206743   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:55.707190   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:56.207973   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:56.706530   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:57.206751   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:57.706346   14037 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 17:31:58.205996   14037 kapi.go:107] duration metric: took 1m22.505136514s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0725 17:31:58.207793   14037 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, ingress-dns, helm-tiller, cloud-spanner, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0725 17:31:58.209159   14037 addons.go:510] duration metric: took 1m31.70011609s for enable addons: enabled=[nvidia-device-plugin default-storageclass storage-provisioner storage-provisioner-rancher inspektor-gadget ingress-dns helm-tiller cloud-spanner yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0725 17:31:58.209208   14037 start.go:246] waiting for cluster config update ...
	I0725 17:31:58.209230   14037 start.go:255] writing updated cluster config ...
	I0725 17:31:58.209488   14037 ssh_runner.go:195] Run: rm -f paused
	I0725 17:31:58.260030   14037 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 17:31:58.261629   14037 out.go:177] * Done! kubectl is now configured to use "addons-377932" cluster and "default" namespace by default
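
The gcp-auth messages in the log above note that credentials are mounted into every pod unless the pod carries a label with the `gcp-auth-skip-secret` key. As a minimal sketch of such a pod (only the label key comes from the log output above; the pod name, container, image, and the "true" value are illustrative assumptions), the manifest could look like:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # placeholder name
      labels:
        gcp-auth-skip-secret: "true"  # label key named in the gcp-auth output; value assumed
    spec:
      containers:
      - name: app                     # placeholder container
        image: nginx                  # placeholder image

Per the same messages, pods created without this label in the addons-377932 cluster get the GCP credentials mounted, and pods that already existed when the addon was enabled need to be recreated (or the addon re-enabled with --refresh) to pick them up.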
	
	
	==> CRI-O <==
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.424351292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721929109424325315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=579ad910-1ad8-40b9-8c22-6ac8c07b80a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.424859262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0596054c-2309-48b3-a363-a13bf7ceec88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.424924132Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0596054c-2309-48b3-a363-a13bf7ceec88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.425218718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:605d9f89738857f73ecfcfeb9c6420117e0974b29ff89f26ea1df493a8406e56,PodSandboxId:406b53b6aea6b145e2e770c1256671904620656b6b6415a0a82d33f7fdf5d9a5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721928903760302707,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-8zkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7916284-6975-4022-aa91-4a43f1c6e583,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed8c40e,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c341f933ddbbc35053577b7ae12cf7787f214585a4dfa82be15d65ceb2b234,PodSandboxId:3f41aee5c6cd1526f311e2d8d93813b09d3b4969473529714c0a5f9bddaed408,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721928764983862542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749c6e52-1618-4a00-9ab0-3f73733eccb3,},Annotations:map[string]string{io.kubernet
es.container.hash: 5ba99726,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae50a4c55eb97c32079964aafe7bf01448d4d3c6e93e58d0b85618d92092309b,PodSandboxId:6a583c13e6c81b0e73f552197c300c8457030474923f0c6a7ba11ddb65d0ea2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721928721948786992,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d885e2b9-afbe-457a-8
610-eb1d724c9dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 69ffc994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e028832ba2ea4cbd56864b959e53d89eb6f2ed40c67f3fd9af5f2f9904ab35,PodSandboxId:01bcc628f927c4229ea5f5b5d5f0b6099e2a844a8eed7c7e104975bddc6377a2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721928673046539755,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-nn7lw,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 4b69ce7d-1c27-46dc-8f29-5bab086365eb,},Annotations:map[string]string{io.kubernetes.container.hash: cf9e61a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60fb2bb6a1c7427e330a768b5c62d791b070a56ee96c61acc8e04670a5a8b14,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721928663778707543,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4e20ecc3a7a1e5a06a26761db7253d5e088592844861951898a1f16f690cfe,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721928632863886449,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes
.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bcedefced0621e7b9b7519fa1d73e3cebec052e20203ce99ec29feb67df47b,PodSandboxId:b08ab3dbd8f75f07c54ae334c1660b9be440eef7c503a30efd23842910f4392e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721928631517353372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d9w47,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce9c77-c60e-470b-bcf9-92bc0457b00c,},Annotations:map[string]string{io.kubernetes.container.hash: ffa22845,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383275aa3c4dc1009fe684f9fa0797717d9dbef15416e9af9ff583db4cd4ed61,PodSandboxId:dcf515c23ca90066fffcbfd53fc84ca925b3643a560df9b1259b9288f2513d45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1e
d4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721928629712219890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvfsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064711fa-5c88-45bd-9b18-e748ebeae659,},Annotations:map[string]string{io.kubernetes.container.hash: 9d0508d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d187294f4f9fa49570c5c4ae46c651198cf47b3c30b7c01b542af281cf6b3f,PodSandboxId:afb1a6e40a036738b703891508a305871507865c5f7acc14eb4be26efecc24e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_RUNNING,CreatedAt:1721928607258165763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78bf4fc3df2079dd9bd7daba69db0ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dbfcd8215ac5cf2a3ae2688eab72ea0ad3e16b46ba80fb0124143a8b0562fc,PodSandboxId:9e2d5db7d289b6a0acb9d72594e5efe1b35f36c3f808754bcc9986477713abe3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State
:CONTAINER_RUNNING,CreatedAt:1721928607274483031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c33acae49de20875adafe8930587501,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74106d9dcdfc7a2f06c3c370eb476841959e525b1b30fbfd2d47762bb38e671d,PodSandboxId:a0c0bd8e8111e087275c8e71eebc5b610c5a03dc5f85e1af2abf2ea40465f359,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINE
R_RUNNING,CreatedAt:1721928607281682391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f7a70e585999e6e7728526daf481b0,},Annotations:map[string]string{io.kubernetes.container.hash: f85058f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe8d24934c776db5c242da943566ed443ef9a8453d1b9eb2574a622d2318cfa,PodSandboxId:c5b9fae7f4ee509cd9e027faa73d8bdbb7c00d260c94d229d68ae6775155c887,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17219286072532784
03,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f45d056b0a4be696cc4a8496bff9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 308547b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0596054c-2309-48b3-a363-a13bf7ceec88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.468522901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a62589bd-e002-460c-8bf8-318421565899 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.468595859Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a62589bd-e002-460c-8bf8-318421565899 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.470161373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a0e6b3c-719a-4747-b0a9-1282883f919d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.471495110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721929109471468313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a0e6b3c-719a-4747-b0a9-1282883f919d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.472052529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b47e20a-c881-4f2f-b0ab-debad98d9e88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.472107932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b47e20a-c881-4f2f-b0ab-debad98d9e88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.472363030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:605d9f89738857f73ecfcfeb9c6420117e0974b29ff89f26ea1df493a8406e56,PodSandboxId:406b53b6aea6b145e2e770c1256671904620656b6b6415a0a82d33f7fdf5d9a5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721928903760302707,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-8zkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7916284-6975-4022-aa91-4a43f1c6e583,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed8c40e,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c341f933ddbbc35053577b7ae12cf7787f214585a4dfa82be15d65ceb2b234,PodSandboxId:3f41aee5c6cd1526f311e2d8d93813b09d3b4969473529714c0a5f9bddaed408,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721928764983862542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749c6e52-1618-4a00-9ab0-3f73733eccb3,},Annotations:map[string]string{io.kubernet
es.container.hash: 5ba99726,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae50a4c55eb97c32079964aafe7bf01448d4d3c6e93e58d0b85618d92092309b,PodSandboxId:6a583c13e6c81b0e73f552197c300c8457030474923f0c6a7ba11ddb65d0ea2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721928721948786992,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d885e2b9-afbe-457a-8
610-eb1d724c9dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 69ffc994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e028832ba2ea4cbd56864b959e53d89eb6f2ed40c67f3fd9af5f2f9904ab35,PodSandboxId:01bcc628f927c4229ea5f5b5d5f0b6099e2a844a8eed7c7e104975bddc6377a2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721928673046539755,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-nn7lw,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 4b69ce7d-1c27-46dc-8f29-5bab086365eb,},Annotations:map[string]string{io.kubernetes.container.hash: cf9e61a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60fb2bb6a1c7427e330a768b5c62d791b070a56ee96c61acc8e04670a5a8b14,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721928663778707543,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4e20ecc3a7a1e5a06a26761db7253d5e088592844861951898a1f16f690cfe,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721928632863886449,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes
.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bcedefced0621e7b9b7519fa1d73e3cebec052e20203ce99ec29feb67df47b,PodSandboxId:b08ab3dbd8f75f07c54ae334c1660b9be440eef7c503a30efd23842910f4392e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721928631517353372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d9w47,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce9c77-c60e-470b-bcf9-92bc0457b00c,},Annotations:map[string]string{io.kubernetes.container.hash: ffa22845,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383275aa3c4dc1009fe684f9fa0797717d9dbef15416e9af9ff583db4cd4ed61,PodSandboxId:dcf515c23ca90066fffcbfd53fc84ca925b3643a560df9b1259b9288f2513d45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1e
d4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721928629712219890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvfsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064711fa-5c88-45bd-9b18-e748ebeae659,},Annotations:map[string]string{io.kubernetes.container.hash: 9d0508d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d187294f4f9fa49570c5c4ae46c651198cf47b3c30b7c01b542af281cf6b3f,PodSandboxId:afb1a6e40a036738b703891508a305871507865c5f7acc14eb4be26efecc24e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_RUNNING,CreatedAt:1721928607258165763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78bf4fc3df2079dd9bd7daba69db0ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dbfcd8215ac5cf2a3ae2688eab72ea0ad3e16b46ba80fb0124143a8b0562fc,PodSandboxId:9e2d5db7d289b6a0acb9d72594e5efe1b35f36c3f808754bcc9986477713abe3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State
:CONTAINER_RUNNING,CreatedAt:1721928607274483031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c33acae49de20875adafe8930587501,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74106d9dcdfc7a2f06c3c370eb476841959e525b1b30fbfd2d47762bb38e671d,PodSandboxId:a0c0bd8e8111e087275c8e71eebc5b610c5a03dc5f85e1af2abf2ea40465f359,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINE
R_RUNNING,CreatedAt:1721928607281682391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f7a70e585999e6e7728526daf481b0,},Annotations:map[string]string{io.kubernetes.container.hash: f85058f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe8d24934c776db5c242da943566ed443ef9a8453d1b9eb2574a622d2318cfa,PodSandboxId:c5b9fae7f4ee509cd9e027faa73d8bdbb7c00d260c94d229d68ae6775155c887,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17219286072532784
03,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f45d056b0a4be696cc4a8496bff9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 308547b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b47e20a-c881-4f2f-b0ab-debad98d9e88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.507906965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10c483ac-bb1a-41b4-b841-fa99f091e867 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.507983775Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10c483ac-bb1a-41b4-b841-fa99f091e867 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.509203918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e4d2370-1e51-46be-9a68-20bffc6d2780 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.510406068Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721929109510374020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e4d2370-1e51-46be-9a68-20bffc6d2780 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.511162201Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7abaf8e8-e82b-4680-8da6-c67a6739e498 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.511218199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7abaf8e8-e82b-4680-8da6-c67a6739e498 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.511456217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:605d9f89738857f73ecfcfeb9c6420117e0974b29ff89f26ea1df493a8406e56,PodSandboxId:406b53b6aea6b145e2e770c1256671904620656b6b6415a0a82d33f7fdf5d9a5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721928903760302707,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-8zkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7916284-6975-4022-aa91-4a43f1c6e583,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed8c40e,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c341f933ddbbc35053577b7ae12cf7787f214585a4dfa82be15d65ceb2b234,PodSandboxId:3f41aee5c6cd1526f311e2d8d93813b09d3b4969473529714c0a5f9bddaed408,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721928764983862542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749c6e52-1618-4a00-9ab0-3f73733eccb3,},Annotations:map[string]string{io.kubernet
es.container.hash: 5ba99726,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae50a4c55eb97c32079964aafe7bf01448d4d3c6e93e58d0b85618d92092309b,PodSandboxId:6a583c13e6c81b0e73f552197c300c8457030474923f0c6a7ba11ddb65d0ea2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721928721948786992,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d885e2b9-afbe-457a-8
610-eb1d724c9dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 69ffc994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e028832ba2ea4cbd56864b959e53d89eb6f2ed40c67f3fd9af5f2f9904ab35,PodSandboxId:01bcc628f927c4229ea5f5b5d5f0b6099e2a844a8eed7c7e104975bddc6377a2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721928673046539755,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-nn7lw,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 4b69ce7d-1c27-46dc-8f29-5bab086365eb,},Annotations:map[string]string{io.kubernetes.container.hash: cf9e61a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60fb2bb6a1c7427e330a768b5c62d791b070a56ee96c61acc8e04670a5a8b14,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721928663778707543,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4e20ecc3a7a1e5a06a26761db7253d5e088592844861951898a1f16f690cfe,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721928632863886449,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes
.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bcedefced0621e7b9b7519fa1d73e3cebec052e20203ce99ec29feb67df47b,PodSandboxId:b08ab3dbd8f75f07c54ae334c1660b9be440eef7c503a30efd23842910f4392e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721928631517353372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d9w47,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce9c77-c60e-470b-bcf9-92bc0457b00c,},Annotations:map[string]string{io.kubernetes.container.hash: ffa22845,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383275aa3c4dc1009fe684f9fa0797717d9dbef15416e9af9ff583db4cd4ed61,PodSandboxId:dcf515c23ca90066fffcbfd53fc84ca925b3643a560df9b1259b9288f2513d45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1e
d4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721928629712219890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvfsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064711fa-5c88-45bd-9b18-e748ebeae659,},Annotations:map[string]string{io.kubernetes.container.hash: 9d0508d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d187294f4f9fa49570c5c4ae46c651198cf47b3c30b7c01b542af281cf6b3f,PodSandboxId:afb1a6e40a036738b703891508a305871507865c5f7acc14eb4be26efecc24e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_RUNNING,CreatedAt:1721928607258165763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78bf4fc3df2079dd9bd7daba69db0ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dbfcd8215ac5cf2a3ae2688eab72ea0ad3e16b46ba80fb0124143a8b0562fc,PodSandboxId:9e2d5db7d289b6a0acb9d72594e5efe1b35f36c3f808754bcc9986477713abe3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State
:CONTAINER_RUNNING,CreatedAt:1721928607274483031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c33acae49de20875adafe8930587501,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74106d9dcdfc7a2f06c3c370eb476841959e525b1b30fbfd2d47762bb38e671d,PodSandboxId:a0c0bd8e8111e087275c8e71eebc5b610c5a03dc5f85e1af2abf2ea40465f359,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINE
R_RUNNING,CreatedAt:1721928607281682391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f7a70e585999e6e7728526daf481b0,},Annotations:map[string]string{io.kubernetes.container.hash: f85058f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe8d24934c776db5c242da943566ed443ef9a8453d1b9eb2574a622d2318cfa,PodSandboxId:c5b9fae7f4ee509cd9e027faa73d8bdbb7c00d260c94d229d68ae6775155c887,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17219286072532784
03,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f45d056b0a4be696cc4a8496bff9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 308547b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7abaf8e8-e82b-4680-8da6-c67a6739e498 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.543479610Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05831f95-b703-4cc4-b51f-b8936b248f60 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.543591076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05831f95-b703-4cc4-b51f-b8936b248f60 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.545117048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31f7f5f8-f823-4eb6-904c-24461269ccc8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.546473156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721929109546438088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589534,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31f7f5f8-f823-4eb6-904c-24461269ccc8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.547095684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16c3c077-d8a8-4a79-ae1b-0be00b467ce4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.547170534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16c3c077-d8a8-4a79-ae1b-0be00b467ce4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:38:29 addons-377932 crio[680]: time="2024-07-25 17:38:29.547468491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:605d9f89738857f73ecfcfeb9c6420117e0974b29ff89f26ea1df493a8406e56,PodSandboxId:406b53b6aea6b145e2e770c1256671904620656b6b6415a0a82d33f7fdf5d9a5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721928903760302707,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-8zkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f7916284-6975-4022-aa91-4a43f1c6e583,},Annotations:map[string]string{io.kubernetes.container.hash: 2ed8c40e,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c341f933ddbbc35053577b7ae12cf7787f214585a4dfa82be15d65ceb2b234,PodSandboxId:3f41aee5c6cd1526f311e2d8d93813b09d3b4969473529714c0a5f9bddaed408,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1721928764983862542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 749c6e52-1618-4a00-9ab0-3f73733eccb3,},Annotations:map[string]string{io.kubernet
es.container.hash: 5ba99726,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae50a4c55eb97c32079964aafe7bf01448d4d3c6e93e58d0b85618d92092309b,PodSandboxId:6a583c13e6c81b0e73f552197c300c8457030474923f0c6a7ba11ddb65d0ea2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721928721948786992,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d885e2b9-afbe-457a-8
610-eb1d724c9dd5,},Annotations:map[string]string{io.kubernetes.container.hash: 69ffc994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e028832ba2ea4cbd56864b959e53d89eb6f2ed40c67f3fd9af5f2f9904ab35,PodSandboxId:01bcc628f927c4229ea5f5b5d5f0b6099e2a844a8eed7c7e104975bddc6377a2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721928673046539755,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-nn7lw,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 4b69ce7d-1c27-46dc-8f29-5bab086365eb,},Annotations:map[string]string{io.kubernetes.container.hash: cf9e61a5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b60fb2bb6a1c7427e330a768b5c62d791b070a56ee96c61acc8e04670a5a8b14,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721928663778707543,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4e20ecc3a7a1e5a06a26761db7253d5e088592844861951898a1f16f690cfe,PodSandboxId:d646775153f6b7a401b0a60cdfca40ecffc797d2d5758aaf272a7970fc17c916,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721928632863886449,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes
.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e60203d-a803-41b0-9d64-802cd79cf088,},Annotations:map[string]string{io.kubernetes.container.hash: 289c5ec7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3bcedefced0621e7b9b7519fa1d73e3cebec052e20203ce99ec29feb67df47b,PodSandboxId:b08ab3dbd8f75f07c54ae334c1660b9be440eef7c503a30efd23842910f4392e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721928631517353372,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-d9w47,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdce9c77-c60e-470b-bcf9-92bc0457b00c,},Annotations:map[string]string{io.kubernetes.container.hash: ffa22845,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:383275aa3c4dc1009fe684f9fa0797717d9dbef15416e9af9ff583db4cd4ed61,PodSandboxId:dcf515c23ca90066fffcbfd53fc84ca925b3643a560df9b1259b9288f2513d45,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1e
d4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721928629712219890,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lvfsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064711fa-5c88-45bd-9b18-e748ebeae659,},Annotations:map[string]string{io.kubernetes.container.hash: 9d0508d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57d187294f4f9fa49570c5c4ae46c651198cf47b3c30b7c01b542af281cf6b3f,PodSandboxId:afb1a6e40a036738b703891508a305871507865c5f7acc14eb4be26efecc24e2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_RUNNING,CreatedAt:1721928607258165763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78bf4fc3df2079dd9bd7daba69db0ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dbfcd8215ac5cf2a3ae2688eab72ea0ad3e16b46ba80fb0124143a8b0562fc,PodSandboxId:9e2d5db7d289b6a0acb9d72594e5efe1b35f36c3f808754bcc9986477713abe3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State
:CONTAINER_RUNNING,CreatedAt:1721928607274483031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c33acae49de20875adafe8930587501,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74106d9dcdfc7a2f06c3c370eb476841959e525b1b30fbfd2d47762bb38e671d,PodSandboxId:a0c0bd8e8111e087275c8e71eebc5b610c5a03dc5f85e1af2abf2ea40465f359,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINE
R_RUNNING,CreatedAt:1721928607281682391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83f7a70e585999e6e7728526daf481b0,},Annotations:map[string]string{io.kubernetes.container.hash: f85058f0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbe8d24934c776db5c242da943566ed443ef9a8453d1b9eb2574a622d2318cfa,PodSandboxId:c5b9fae7f4ee509cd9e027faa73d8bdbb7c00d260c94d229d68ae6775155c887,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:17219286072532784
03,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377932,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6f45d056b0a4be696cc4a8496bff9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 308547b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16c3c077-d8a8-4a79-ae1b-0be00b467ce4 name=/runtime.v1.RuntimeService/ListContainers
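	The crio entries above are CRI-O's gRPC debug log answering the kubelet's periodic CRI polls (Version, ImageFsInfo, and an unfiltered ListContainers), which is why the same container list is dumped on every poll. As a rough sketch, assuming crictl is available inside the minikube VM (as it normally is with the crio runtime), the same endpoints can be queried by hand:
	
	  out/minikube-linux-amd64 -p addons-377932 ssh "sudo crictl version"
	  out/minikube-linux-amd64 -p addons-377932 ssh "sudo crictl imagefsinfo"
	  out/minikube-linux-amd64 -p addons-377932 ssh "sudo crictl ps -a"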
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	605d9f8973885       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   406b53b6aea6b       hello-world-app-6778b5fc9f-8zkzg
	54c341f933ddb       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   3f41aee5c6cd1       nginx
	ae50a4c55eb97       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   6a583c13e6c81       busybox
	96e028832ba2e       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   01bcc628f927c       metrics-server-c59844bb4-nn7lw
	b60fb2bb6a1c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       1                   d646775153f6b       storage-provisioner
	cf4e20ecc3a7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Exited              storage-provisioner       0                   d646775153f6b       storage-provisioner
	f3bcedefced06       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   b08ab3dbd8f75       coredns-7db6d8ff4d-d9w47
	383275aa3c4dc       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        7 minutes ago       Running             kube-proxy                0                   dcf515c23ca90       kube-proxy-lvfsq
	74106d9dcdfc7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   a0c0bd8e8111e       etcd-addons-377932
	a6dbfcd8215ac       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        8 minutes ago       Running             kube-controller-manager   0                   9e2d5db7d289b       kube-controller-manager-addons-377932
	57d187294f4f9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        8 minutes ago       Running             kube-scheduler            0                   afb1a6e40a036       kube-scheduler-addons-377932
	cbe8d24934c77       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        8 minutes ago       Running             kube-apiserver            0                   c5b9fae7f4ee5       kube-apiserver-addons-377932
	
	
	==> coredns [f3bcedefced0621e7b9b7519fa1d73e3cebec052e20203ce99ec29feb67df47b] <==
	[INFO] 10.244.0.6:55901 - 27103 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000167601s
	[INFO] 10.244.0.6:55554 - 46043 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112123s
	[INFO] 10.244.0.6:55554 - 18117 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000214134s
	[INFO] 10.244.0.6:55789 - 36728 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000176063s
	[INFO] 10.244.0.6:55789 - 25975 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000215718s
	[INFO] 10.244.0.6:43263 - 14339 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000212073s
	[INFO] 10.244.0.6:43263 - 39681 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000205896s
	[INFO] 10.244.0.6:33956 - 63895 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104753s
	[INFO] 10.244.0.6:33956 - 38802 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000083122s
	[INFO] 10.244.0.6:40038 - 19046 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093401s
	[INFO] 10.244.0.6:40038 - 6756 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064036s
	[INFO] 10.244.0.6:36986 - 25565 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000031308s
	[INFO] 10.244.0.6:36986 - 23259 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046989s
	[INFO] 10.244.0.6:58056 - 39356 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009466s
	[INFO] 10.244.0.6:58056 - 14243 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000041422s
	[INFO] 10.244.0.22:43424 - 58041 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000400291s
	[INFO] 10.244.0.22:41194 - 1157 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142741s
	[INFO] 10.244.0.22:57305 - 32410 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100836s
	[INFO] 10.244.0.22:52169 - 65482 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000097047s
	[INFO] 10.244.0.22:40228 - 47166 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104419s
	[INFO] 10.244.0.22:53036 - 19764 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143616s
	[INFO] 10.244.0.22:34328 - 24030 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000638244s
	[INFO] 10.244.0.22:35768 - 17654 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000902665s
	[INFO] 10.244.0.24:34710 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000380641s
	[INFO] 10.244.0.24:41806 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127074s
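	The repeated NXDOMAIN answers above are expected: with the default pod resolv.conf (ndots:5), a name such as registry.kube-system.svc.cluster.local has fewer than five dots, so it is first expanded through the cluster search domains (each attempt returning NXDOMAIN) before the final fully-qualified query returns NOERROR. A typical resolv.conf for a kube-system pod in this kind of cluster would look roughly like the sketch below; the nameserver address shown is minikube's usual kube-dns service IP and is an assumption, not taken from this report:
	
	  nameserver 10.96.0.10
	  search kube-system.svc.cluster.local svc.cluster.local cluster.local
	  options ndots:5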
	
	
	==> describe nodes <==
	Name:               addons-377932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-377932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=addons-377932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T17_30_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-377932
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:30:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-377932
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:38:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:35:19 +0000   Thu, 25 Jul 2024 17:30:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:35:19 +0000   Thu, 25 Jul 2024 17:30:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:35:19 +0000   Thu, 25 Jul 2024 17:30:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:35:19 +0000   Thu, 25 Jul 2024 17:30:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    addons-377932
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1ad6651334941f8ab25b3dc98a618d4
	  System UUID:                a1ad6651-3349-41f8-ab25-b3dc98a618d4
	  Boot ID:                    25c5e1f3-b9c2-4564-b6e2-3d70d430654e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  default                     hello-world-app-6778b5fc9f-8zkzg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 coredns-7db6d8ff4d-d9w47                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m2s
	  kube-system                 etcd-addons-377932                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m16s
	  kube-system                 kube-apiserver-addons-377932             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-controller-manager-addons-377932    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-proxy-lvfsq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-addons-377932             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 metrics-server-c59844bb4-nn7lw           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m58s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m56s  kube-proxy       
	  Normal  Starting                 8m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m16s  kubelet          Node addons-377932 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m16s  kubelet          Node addons-377932 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m16s  kubelet          Node addons-377932 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m16s  kubelet          Node addons-377932 status is now: NodeReady
	  Normal  RegisteredNode           8m4s   node-controller  Node addons-377932 event: Registered Node addons-377932 in Controller
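	The node description above is captured kubectl describe output for the control-plane machine; against a live cluster it could be regenerated with, for example:
	
	  kubectl --context addons-377932 describe node addons-377932
	
	The allocated-resources percentages are relative to the node's 2-CPU / 3912780Ki capacity (for instance, 850m of 2000m requested CPU is about 42%).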
	
	
	==> dmesg <==
	[Jul25 17:31] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.555816] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.071696] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.737151] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.017391] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.066267] kauditd_printk_skb: 94 callbacks suppressed
	[  +6.753077] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.060538] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.518415] kauditd_printk_skb: 7 callbacks suppressed
	[Jul25 17:32] kauditd_printk_skb: 45 callbacks suppressed
	[ +10.387617] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.768604] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.214825] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.369753] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.021562] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.027313] kauditd_printk_skb: 9 callbacks suppressed
	[Jul25 17:33] kauditd_printk_skb: 35 callbacks suppressed
	[ +20.972468] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.260723] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.556230] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.450480] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.510807] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.369221] kauditd_printk_skb: 13 callbacks suppressed
	[Jul25 17:35] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.610152] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [74106d9dcdfc7a2f06c3c370eb476841959e525b1b30fbfd2d47762bb38e671d] <==
	{"level":"warn","ts":"2024-07-25T17:31:32.690351Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:31:32.298102Z","time spent":"392.233139ms","remote":"127.0.0.1:44736","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":1,"response size":836,"request content":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-nn7lw.17e585017caf86f8\" "}
	{"level":"info","ts":"2024-07-25T17:31:32.689886Z","caller":"traceutil/trace.go:171","msg":"trace[762310865] range","detail":"{range_begin:/registry/pods/yakd-dashboard/yakd-dashboard-799879c74f-dcg7m; range_end:; response_count:1; response_revision:998; }","duration":"177.667092ms","start":"2024-07-25T17:31:32.512207Z","end":"2024-07-25T17:31:32.689874Z","steps":["trace[762310865] 'agreement among raft nodes before linearized reading'  (duration: 175.932822ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:31:40.794264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.557994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85761"}
	{"level":"info","ts":"2024-07-25T17:31:40.794397Z","caller":"traceutil/trace.go:171","msg":"trace[1325571677] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1066; }","duration":"102.721451ms","start":"2024-07-25T17:31:40.691665Z","end":"2024-07-25T17:31:40.794386Z","steps":["trace[1325571677] 'range keys from in-memory index tree'  (duration: 102.383306ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:31:45.415532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.092629ms","expected-duration":"100ms","prefix":"","request":"header:<ID:345529135170824684 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-377932\" mod_revision:1012 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-377932\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-377932\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-25T17:31:45.415683Z","caller":"traceutil/trace.go:171","msg":"trace[1704449633] linearizableReadLoop","detail":"{readStateIndex:1136; appliedIndex:1135; }","duration":"224.525723ms","start":"2024-07-25T17:31:45.191143Z","end":"2024-07-25T17:31:45.415669Z","steps":["trace[1704449633] 'read index received'  (duration: 100.150917ms)","trace[1704449633] 'applied index is now lower than readState.Index'  (duration: 124.373688ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T17:31:45.415962Z","caller":"traceutil/trace.go:171","msg":"trace[1417700302] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"484.086363ms","start":"2024-07-25T17:31:44.931859Z","end":"2024-07-25T17:31:45.415945Z","steps":["trace[1417700302] 'process raft request'  (duration: 359.485918ms)","trace[1417700302] 'compare'  (duration: 123.849162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-25T17:31:45.416184Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:31:44.93184Z","time spent":"484.280217ms","remote":"127.0.0.1:44916","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-377932\" mod_revision:1012 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-377932\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-377932\" > >"}
	{"level":"warn","ts":"2024-07-25T17:31:45.416547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.395885ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85761"}
	{"level":"info","ts":"2024-07-25T17:31:45.416618Z","caller":"traceutil/trace.go:171","msg":"trace[1547704006] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1100; }","duration":"225.487932ms","start":"2024-07-25T17:31:45.191119Z","end":"2024-07-25T17:31:45.416607Z","steps":["trace[1547704006] 'agreement among raft nodes before linearized reading'  (duration: 225.263023ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:31:45.41741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.557174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-25T17:31:45.417492Z","caller":"traceutil/trace.go:171","msg":"trace[1303947353] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1100; }","duration":"120.017077ms","start":"2024-07-25T17:31:45.297464Z","end":"2024-07-25T17:31:45.417481Z","steps":["trace[1303947353] 'agreement among raft nodes before linearized reading'  (duration: 119.445139ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:31:45.417683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.764401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-25T17:31:45.417743Z","caller":"traceutil/trace.go:171","msg":"trace[80852776] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1100; }","duration":"149.858176ms","start":"2024-07-25T17:31:45.267876Z","end":"2024-07-25T17:31:45.417735Z","steps":["trace[80852776] 'agreement among raft nodes before linearized reading'  (duration: 149.759907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:31:45.419212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.577773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-25T17:31:45.41935Z","caller":"traceutil/trace.go:171","msg":"trace[362820918] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1100; }","duration":"211.736581ms","start":"2024-07-25T17:31:45.207605Z","end":"2024-07-25T17:31:45.419341Z","steps":["trace[362820918] 'agreement among raft nodes before linearized reading'  (duration: 210.310741ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T17:32:28.600454Z","caller":"traceutil/trace.go:171","msg":"trace[605734207] linearizableReadLoop","detail":"{readStateIndex:1375; appliedIndex:1374; }","duration":"158.630197ms","start":"2024-07-25T17:32:28.441803Z","end":"2024-07-25T17:32:28.600433Z","steps":["trace[605734207] 'read index received'  (duration: 158.482248ms)","trace[605734207] 'applied index is now lower than readState.Index'  (duration: 147.445µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T17:32:28.600717Z","caller":"traceutil/trace.go:171","msg":"trace[2054451883] transaction","detail":"{read_only:false; response_revision:1328; number_of_response:1; }","duration":"236.40985ms","start":"2024-07-25T17:32:28.364295Z","end":"2024-07-25T17:32:28.600704Z","steps":["trace[2054451883] 'process raft request'  (duration: 236.045771ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:32:28.60092Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.086634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/\" range_end:\"/registry/pods/yakd-dashboard0\" ","response":"range_response_count:1 size:4330"}
	{"level":"info","ts":"2024-07-25T17:32:28.600945Z","caller":"traceutil/trace.go:171","msg":"trace[1867259601] range","detail":"{range_begin:/registry/pods/yakd-dashboard/; range_end:/registry/pods/yakd-dashboard0; response_count:1; response_revision:1328; }","duration":"159.159823ms","start":"2024-07-25T17:32:28.441778Z","end":"2024-07-25T17:32:28.600938Z","steps":["trace[1867259601] 'agreement among raft nodes before linearized reading'  (duration: 159.052961ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:32:28.601234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.966828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85995"}
	{"level":"info","ts":"2024-07-25T17:32:28.601257Z","caller":"traceutil/trace.go:171","msg":"trace[1953377226] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1328; }","duration":"156.013637ms","start":"2024-07-25T17:32:28.445237Z","end":"2024-07-25T17:32:28.601251Z","steps":["trace[1953377226] 'agreement among raft nodes before linearized reading'  (duration: 155.844376ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:32:28.601574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.291158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85995"}
	{"level":"info","ts":"2024-07-25T17:32:28.601594Z","caller":"traceutil/trace.go:171","msg":"trace[2045294918] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1328; }","duration":"156.320505ms","start":"2024-07-25T17:32:28.445268Z","end":"2024-07-25T17:32:28.601588Z","steps":["trace[2045294918] 'agreement among raft nodes before linearized reading'  (duration: 156.194287ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:33:02.578729Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:33:02.234918Z","time spent":"343.80702ms","remote":"127.0.0.1:44696","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> kernel <==
	 17:38:29 up 8 min,  0 users,  load average: 0.28, 0.51, 0.36
	Linux addons-377932 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cbe8d24934c776db5c242da943566ed443ef9a8453d1b9eb2574a622d2318cfa] <==
	E0725 17:32:22.326949       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0725 17:32:22.327586       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.47.217:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.47.217:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.97.47.217:443: connect: connection refused
	I0725 17:32:22.388151       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0725 17:32:39.879850       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0725 17:32:40.668189       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0725 17:32:40.855979       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.185.29"}
	W0725 17:32:40.935506       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0725 17:33:09.804713       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0725 17:33:12.987920       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0725 17:33:32.039138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0725 17:33:32.039213       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0725 17:33:32.071185       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0725 17:33:32.071574       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0725 17:33:32.090155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0725 17:33:32.090908       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0725 17:33:32.110467       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0725 17:33:32.112186       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0725 17:33:32.137390       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0725 17:33:32.137441       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0725 17:33:33.091136       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0725 17:33:33.138110       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0725 17:33:33.146608       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0725 17:33:41.017737       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.214.51"}
	I0725 17:35:00.984788       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.165.229"}
	
	
	==> kube-controller-manager [a6dbfcd8215ac5cf2a3ae2688eab72ea0ad3e16b46ba80fb0124143a8b0562fc] <==
	E0725 17:36:12.592108       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:36:15.646754       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:36:15.646793       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:36:34.846059       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:36:34.846271       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:36:47.805127       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:36:47.805193       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:36:59.281643       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:36:59.281690       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:37:08.016761       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:37:08.016812       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:37:22.481821       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:37:22.481855       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:37:33.455309       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:37:33.455348       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:37:51.482141       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:37:51.482205       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:37:53.385577       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:37:53.385621       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:38:06.352373       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:38:06.352429       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:38:12.962663       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:38:12.962805       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0725 17:38:25.555761       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0725 17:38:25.555815       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [383275aa3c4dc1009fe684f9fa0797717d9dbef15416e9af9ff583db4cd4ed61] <==
	I0725 17:30:32.953969       1 server_linux.go:69] "Using iptables proxy"
	I0725 17:30:32.984434       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	I0725 17:30:33.130839       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:30:33.130913       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:30:33.130930       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:30:33.133102       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:30:33.133305       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:30:33.133317       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:30:33.134982       1 config.go:192] "Starting service config controller"
	I0725 17:30:33.135504       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:30:33.135581       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:30:33.135598       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:30:33.138617       1 config.go:319] "Starting node config controller"
	I0725 17:30:33.138642       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:30:33.236583       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 17:30:33.236674       1 shared_informer.go:320] Caches are synced for service config
	I0725 17:30:33.239125       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [57d187294f4f9fa49570c5c4ae46c651198cf47b3c30b7c01b542af281cf6b3f] <==
	W0725 17:30:10.785757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 17:30:10.785836       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 17:30:10.884326       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 17:30:10.884711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 17:30:10.991803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 17:30:10.991907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 17:30:11.043905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 17:30:11.044095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 17:30:11.165911       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 17:30:11.165940       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 17:30:11.207547       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 17:30:11.207587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0725 17:30:11.229278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 17:30:11.229480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 17:30:11.259680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 17:30:11.259723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0725 17:30:11.335661       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 17:30:11.335752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 17:30:11.368682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 17:30:11.368793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 17:30:11.379055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 17:30:11.379181       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 17:30:11.397363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 17:30:11.397403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0725 17:30:12.998719       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 17:35:13 addons-377932 kubelet[1275]: E0725 17:35:13.029319    1275 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:35:13 addons-377932 kubelet[1275]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:35:13 addons-377932 kubelet[1275]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:35:13 addons-377932 kubelet[1275]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:35:13 addons-377932 kubelet[1275]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:35:13 addons-377932 kubelet[1275]: I0725 17:35:13.467766    1275 scope.go:117] "RemoveContainer" containerID="6b81445e13cad64a3211c0ad0ce4b0e6d6e232e247547b68dc9ec46c436aa5b2"
	Jul 25 17:35:13 addons-377932 kubelet[1275]: I0725 17:35:13.482702    1275 scope.go:117] "RemoveContainer" containerID="6b550d33b6fccc0b31cc4b41a3806ec0bdb3bcf329c0dd7b236eefee161007d4"
	Jul 25 17:35:20 addons-377932 kubelet[1275]: I0725 17:35:20.012279    1275 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 25 17:36:13 addons-377932 kubelet[1275]: E0725 17:36:13.027699    1275 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:36:13 addons-377932 kubelet[1275]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:36:13 addons-377932 kubelet[1275]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:36:13 addons-377932 kubelet[1275]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:36:13 addons-377932 kubelet[1275]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:36:27 addons-377932 kubelet[1275]: I0725 17:36:27.014382    1275 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 25 17:37:13 addons-377932 kubelet[1275]: E0725 17:37:13.026481    1275 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:37:13 addons-377932 kubelet[1275]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:37:13 addons-377932 kubelet[1275]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:37:13 addons-377932 kubelet[1275]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:37:13 addons-377932 kubelet[1275]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:37:29 addons-377932 kubelet[1275]: I0725 17:37:29.013597    1275 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 25 17:38:13 addons-377932 kubelet[1275]: E0725 17:38:13.025937    1275 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:38:13 addons-377932 kubelet[1275]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:38:13 addons-377932 kubelet[1275]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:38:13 addons-377932 kubelet[1275]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:38:13 addons-377932 kubelet[1275]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [b60fb2bb6a1c7427e330a768b5c62d791b070a56ee96c61acc8e04670a5a8b14] <==
	I0725 17:31:03.952748       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 17:31:03.972372       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 17:31:03.972424       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 17:31:03.982985       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 17:31:03.983820       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96c9b105-9701-4bd5-be6c-ab3851c1b16b", APIVersion:"v1", ResourceVersion:"906", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-377932_d098c3fa-8bf8-48ba-9c68-5904fb97e43e became leader
	I0725 17:31:03.983857       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-377932_d098c3fa-8bf8-48ba-9c68-5904fb97e43e!
	I0725 17:31:04.084069       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-377932_d098c3fa-8bf8-48ba-9c68-5904fb97e43e!
	
	
	==> storage-provisioner [cf4e20ecc3a7a1e5a06a26761db7253d5e088592844861951898a1f16f690cfe] <==
	I0725 17:30:33.434850       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0725 17:31:03.440771       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-377932 -n addons-377932
helpers_test.go:261: (dbg) Run:  kubectl --context addons-377932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (367.17s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.44s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-377932
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-377932: exit status 82 (2m0.467709338s)

                                                
                                                
-- stdout --
	* Stopping node "addons-377932"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-377932" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-377932
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-377932: exit status 11 (21.689205622s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.150:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-377932" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-377932
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-377932: exit status 11 (6.144230095s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.150:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-377932" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-377932
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-377932: exit status 11 (6.142639285s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.150:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-377932" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.44s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 node stop m02 -v=7 --alsologtostderr
E0725 17:49:53.017687   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:50:33.978140   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174036 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.470541942s)

                                                
                                                
-- stdout --
	* Stopping node "ha-174036-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:49:39.995511   27801 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:49:39.996075   27801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:49:39.996134   27801 out.go:304] Setting ErrFile to fd 2...
	I0725 17:49:39.996146   27801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:49:39.996650   27801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:49:39.997371   27801 mustload.go:65] Loading cluster: ha-174036
	I0725 17:49:39.997778   27801 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:49:39.997808   27801 stop.go:39] StopHost: ha-174036-m02
	I0725 17:49:39.998330   27801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:49:39.998411   27801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:49:40.014372   27801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I0725 17:49:40.014886   27801 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:49:40.015559   27801 main.go:141] libmachine: Using API Version  1
	I0725 17:49:40.015590   27801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:49:40.015995   27801 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:49:40.018449   27801 out.go:177] * Stopping node "ha-174036-m02"  ...
	I0725 17:49:40.019963   27801 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0725 17:49:40.020003   27801 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:49:40.020299   27801 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0725 17:49:40.020340   27801 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:49:40.023790   27801 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:49:40.024289   27801 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:49:40.024353   27801 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:49:40.024632   27801 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:49:40.024874   27801 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:49:40.025087   27801 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:49:40.025252   27801 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	I0725 17:49:40.107213   27801 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0725 17:49:40.160512   27801 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0725 17:49:40.214604   27801 main.go:141] libmachine: Stopping "ha-174036-m02"...
	I0725 17:49:40.214639   27801 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 17:49:40.216667   27801 main.go:141] libmachine: (ha-174036-m02) Calling .Stop
	I0725 17:49:40.220741   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 0/120
	I0725 17:49:41.222638   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 1/120
	I0725 17:49:42.223992   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 2/120
	I0725 17:49:43.226132   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 3/120
	I0725 17:49:44.227307   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 4/120
	I0725 17:49:45.229621   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 5/120
	I0725 17:49:46.231131   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 6/120
	I0725 17:49:47.232720   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 7/120
	I0725 17:49:48.234928   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 8/120
	I0725 17:49:49.237324   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 9/120
	I0725 17:49:50.239424   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 10/120
	I0725 17:49:51.240887   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 11/120
	I0725 17:49:52.242191   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 12/120
	I0725 17:49:53.243685   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 13/120
	I0725 17:49:54.245154   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 14/120
	I0725 17:49:55.247425   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 15/120
	I0725 17:49:56.248788   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 16/120
	I0725 17:49:57.250807   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 17/120
	I0725 17:49:58.252174   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 18/120
	I0725 17:49:59.253563   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 19/120
	I0725 17:50:00.254928   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 20/120
	I0725 17:50:01.256690   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 21/120
	I0725 17:50:02.259179   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 22/120
	I0725 17:50:03.260530   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 23/120
	I0725 17:50:04.261786   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 24/120
	I0725 17:50:05.263494   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 25/120
	I0725 17:50:06.264764   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 26/120
	I0725 17:50:07.266916   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 27/120
	I0725 17:50:08.268688   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 28/120
	I0725 17:50:09.270317   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 29/120
	I0725 17:50:10.272231   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 30/120
	I0725 17:50:11.274240   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 31/120
	I0725 17:50:12.275888   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 32/120
	I0725 17:50:13.277702   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 33/120
	I0725 17:50:14.279422   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 34/120
	I0725 17:50:15.281382   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 35/120
	I0725 17:50:16.282834   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 36/120
	I0725 17:50:17.284262   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 37/120
	I0725 17:50:18.285886   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 38/120
	I0725 17:50:19.287278   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 39/120
	I0725 17:50:20.289571   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 40/120
	I0725 17:50:21.291605   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 41/120
	I0725 17:50:22.292980   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 42/120
	I0725 17:50:23.294653   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 43/120
	I0725 17:50:24.296500   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 44/120
	I0725 17:50:25.298489   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 45/120
	I0725 17:50:26.300118   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 46/120
	I0725 17:50:27.301422   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 47/120
	I0725 17:50:28.302759   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 48/120
	I0725 17:50:29.304243   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 49/120
	I0725 17:50:30.306522   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 50/120
	I0725 17:50:31.308420   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 51/120
	I0725 17:50:32.309869   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 52/120
	I0725 17:50:33.311719   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 53/120
	I0725 17:50:34.313829   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 54/120
	I0725 17:50:35.315446   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 55/120
	I0725 17:50:36.317050   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 56/120
	I0725 17:50:37.318358   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 57/120
	I0725 17:50:38.319666   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 58/120
	I0725 17:50:39.320976   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 59/120
	I0725 17:50:40.323109   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 60/120
	I0725 17:50:41.324481   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 61/120
	I0725 17:50:42.325892   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 62/120
	I0725 17:50:43.327542   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 63/120
	I0725 17:50:44.328954   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 64/120
	I0725 17:50:45.330583   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 65/120
	I0725 17:50:46.331992   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 66/120
	I0725 17:50:47.333375   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 67/120
	I0725 17:50:48.334712   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 68/120
	I0725 17:50:49.336588   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 69/120
	I0725 17:50:50.338784   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 70/120
	I0725 17:50:51.340232   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 71/120
	I0725 17:50:52.342086   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 72/120
	I0725 17:50:53.343777   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 73/120
	I0725 17:50:54.345381   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 74/120
	I0725 17:50:55.347359   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 75/120
	I0725 17:50:56.349102   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 76/120
	I0725 17:50:57.351161   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 77/120
	I0725 17:50:58.352536   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 78/120
	I0725 17:50:59.354913   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 79/120
	I0725 17:51:00.356445   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 80/120
	I0725 17:51:01.358672   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 81/120
	I0725 17:51:02.360011   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 82/120
	I0725 17:51:03.361307   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 83/120
	I0725 17:51:04.362693   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 84/120
	I0725 17:51:05.364292   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 85/120
	I0725 17:51:06.365651   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 86/120
	I0725 17:51:07.367195   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 87/120
	I0725 17:51:08.368886   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 88/120
	I0725 17:51:09.371023   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 89/120
	I0725 17:51:10.373453   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 90/120
	I0725 17:51:11.374931   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 91/120
	I0725 17:51:12.376475   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 92/120
	I0725 17:51:13.377577   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 93/120
	I0725 17:51:14.378932   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 94/120
	I0725 17:51:15.380963   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 95/120
	I0725 17:51:16.382655   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 96/120
	I0725 17:51:17.384711   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 97/120
	I0725 17:51:18.386788   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 98/120
	I0725 17:51:19.388150   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 99/120
	I0725 17:51:20.390154   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 100/120
	I0725 17:51:21.391411   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 101/120
	I0725 17:51:22.393643   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 102/120
	I0725 17:51:23.394919   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 103/120
	I0725 17:51:24.396896   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 104/120
	I0725 17:51:25.398914   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 105/120
	I0725 17:51:26.400538   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 106/120
	I0725 17:51:27.401857   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 107/120
	I0725 17:51:28.403475   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 108/120
	I0725 17:51:29.404825   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 109/120
	I0725 17:51:30.406706   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 110/120
	I0725 17:51:31.408439   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 111/120
	I0725 17:51:32.410496   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 112/120
	I0725 17:51:33.411842   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 113/120
	I0725 17:51:34.413136   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 114/120
	I0725 17:51:35.414904   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 115/120
	I0725 17:51:36.416333   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 116/120
	I0725 17:51:37.417936   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 117/120
	I0725 17:51:38.419115   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 118/120
	I0725 17:51:39.420960   27801 main.go:141] libmachine: (ha-174036-m02) Waiting for machine to stop 119/120
	I0725 17:51:40.421609   27801 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0725 17:51:40.421753   27801 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-174036 node stop m02 -v=7 --alsologtostderr": exit status 30
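The "Waiting for machine to stop N/120" entries above come from a fixed one-second poll that gives up after 120 attempts (about two minutes) and then surfaces the unable-to-stop error. A minimal Go sketch of that pattern, for illustration only (isStopped is a hypothetical stand-in for the libvirt state query; this is not minikube's actual stop code):

// stoppoll.go: illustrative only -- a 1s poll that gives up after 120 attempts,
// mirroring the "Waiting for machine to stop N/120" lines above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// isStopped is a hypothetical stand-in for querying the VM state via libvirt.
func isStopped() bool { return false }

func waitForStop(attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		if isStopped() {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(120, time.Second); err != nil {
		fmt.Println("stop err:", err)
	}
}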
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
E0725 17:51:55.899164   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:51:58.590199   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr: exit status 3 (19.06641133s)
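In the stderr trace further below, each reachable control-plane node is checked by running "df -h /var", confirming the kubelet service, locating the kube-apiserver process, and finally requesting https://192.168.39.254:8443/healthz and expecting a 200. A minimal Go sketch of that healthz probe (an illustration, not minikube's client; skipping certificate verification is an assumption made only to keep the sketch self-contained):

// healthz.go: illustrative apiserver health probe against the VIP seen in the trace.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: the cluster CA is not loaded, so TLS verification
		// is skipped. minikube's real status client is configured differently.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 for ha-174036 and m03 in the trace
}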

                                                
                                                
-- stdout --
	ha-174036
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174036-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:51:40.466430   28259 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:51:40.466552   28259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:51:40.466569   28259 out.go:304] Setting ErrFile to fd 2...
	I0725 17:51:40.466576   28259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:51:40.466730   28259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:51:40.466895   28259 out.go:298] Setting JSON to false
	I0725 17:51:40.466918   28259 mustload.go:65] Loading cluster: ha-174036
	I0725 17:51:40.467017   28259 notify.go:220] Checking for updates...
	I0725 17:51:40.467250   28259 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:51:40.467262   28259 status.go:255] checking status of ha-174036 ...
	I0725 17:51:40.467628   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:40.467667   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:40.484539   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0725 17:51:40.485197   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:40.485868   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:40.485914   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:40.486280   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:40.486606   28259 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:51:40.488227   28259 status.go:330] ha-174036 host status = "Running" (err=<nil>)
	I0725 17:51:40.488247   28259 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:51:40.488595   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:40.488637   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:40.503010   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0725 17:51:40.503545   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:40.503992   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:40.504010   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:40.504315   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:40.504506   28259 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:51:40.507243   28259 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:51:40.507619   28259 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:51:40.507652   28259 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:51:40.507743   28259 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:51:40.508011   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:40.508040   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:40.522930   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0725 17:51:40.523414   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:40.523891   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:40.523907   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:40.524221   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:40.524487   28259 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:51:40.524709   28259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:51:40.524741   28259 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:51:40.527485   28259 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:51:40.527965   28259 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:51:40.527990   28259 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:51:40.528153   28259 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:51:40.528405   28259 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:51:40.528582   28259 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:51:40.528757   28259 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:51:40.617096   28259 ssh_runner.go:195] Run: systemctl --version
	I0725 17:51:40.628183   28259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:51:40.649847   28259 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:51:40.649874   28259 api_server.go:166] Checking apiserver status ...
	I0725 17:51:40.649916   28259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:51:40.665885   28259 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0725 17:51:40.683112   28259 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:51:40.683166   28259 ssh_runner.go:195] Run: ls
	I0725 17:51:40.688587   28259 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:51:40.694165   28259 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:51:40.694190   28259 status.go:422] ha-174036 apiserver status = Running (err=<nil>)
	I0725 17:51:40.694202   28259 status.go:257] ha-174036 status: &{Name:ha-174036 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:51:40.694232   28259 status.go:255] checking status of ha-174036-m02 ...
	I0725 17:51:40.694546   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:40.694593   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:40.709478   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0725 17:51:40.709911   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:40.710446   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:40.710465   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:40.710784   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:40.711005   28259 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 17:51:40.713045   28259 status.go:330] ha-174036-m02 host status = "Running" (err=<nil>)
	I0725 17:51:40.713064   28259 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:51:40.713481   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:40.713530   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:40.730063   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43397
	I0725 17:51:40.730481   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:40.731010   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:40.731029   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:40.731373   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:40.731594   28259 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:51:40.733961   28259 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:51:40.734400   28259 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:51:40.734422   28259 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:51:40.734573   28259 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:51:40.734889   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:40.734920   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:40.748809   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42987
	I0725 17:51:40.749146   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:40.749651   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:40.749672   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:40.749962   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:40.750132   28259 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:51:40.750323   28259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:51:40.750346   28259 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:51:40.753045   28259 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:51:40.753453   28259 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:51:40.753479   28259 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:51:40.753603   28259 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:51:40.753838   28259 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:51:40.753977   28259 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:51:40.754102   28259 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	W0725 17:51:59.136561   28259 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.197:22: connect: no route to host
	W0725 17:51:59.136659   28259 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	E0725 17:51:59.136677   28259 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:51:59.136684   28259 status.go:257] ha-174036-m02 status: &{Name:ha-174036-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0725 17:51:59.136702   28259 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:51:59.136714   28259 status.go:255] checking status of ha-174036-m03 ...
	I0725 17:51:59.137129   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:59.137193   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:59.152173   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44309
	I0725 17:51:59.152548   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:59.153058   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:59.153089   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:59.153442   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:59.153676   28259 main.go:141] libmachine: (ha-174036-m03) Calling .GetState
	I0725 17:51:59.155255   28259 status.go:330] ha-174036-m03 host status = "Running" (err=<nil>)
	I0725 17:51:59.155269   28259 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:51:59.155558   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:59.155593   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:59.170700   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33987
	I0725 17:51:59.171110   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:59.171540   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:59.171564   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:59.171880   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:59.172040   28259 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:51:59.174805   28259 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:51:59.175192   28259 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:51:59.175229   28259 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:51:59.175364   28259 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:51:59.175754   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:59.175797   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:59.190598   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
	I0725 17:51:59.191005   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:59.191469   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:59.191487   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:59.191766   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:59.191936   28259 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:51:59.192101   28259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:51:59.192122   28259 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:51:59.194916   28259 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:51:59.195439   28259 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:51:59.195464   28259 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:51:59.195632   28259 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:51:59.195774   28259 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:51:59.195901   28259 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:51:59.196006   28259 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:51:59.276993   28259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:51:59.294552   28259 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:51:59.294583   28259 api_server.go:166] Checking apiserver status ...
	I0725 17:51:59.294617   28259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:51:59.308851   28259 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0725 17:51:59.317727   28259 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:51:59.317776   28259 ssh_runner.go:195] Run: ls
	I0725 17:51:59.324763   28259 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:51:59.329022   28259 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:51:59.329039   28259 status.go:422] ha-174036-m03 apiserver status = Running (err=<nil>)
	I0725 17:51:59.329046   28259 status.go:257] ha-174036-m03 status: &{Name:ha-174036-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:51:59.329068   28259 status.go:255] checking status of ha-174036-m04 ...
	I0725 17:51:59.329333   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:59.329367   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:59.346766   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I0725 17:51:59.347143   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:59.347571   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:59.347596   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:59.347866   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:59.348048   28259 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 17:51:59.349541   28259 status.go:330] ha-174036-m04 host status = "Running" (err=<nil>)
	I0725 17:51:59.349554   28259 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:51:59.349825   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:59.349857   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:59.364658   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
	I0725 17:51:59.365053   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:59.365492   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:59.365515   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:59.365823   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:59.366025   28259 main.go:141] libmachine: (ha-174036-m04) Calling .GetIP
	I0725 17:51:59.368609   28259 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:51:59.369006   28259 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:51:59.369031   28259 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:51:59.369148   28259 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:51:59.369425   28259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:51:59.369461   28259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:51:59.384746   28259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0725 17:51:59.385140   28259 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:51:59.385673   28259 main.go:141] libmachine: Using API Version  1
	I0725 17:51:59.385707   28259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:51:59.385998   28259 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:51:59.386164   28259 main.go:141] libmachine: (ha-174036-m04) Calling .DriverName
	I0725 17:51:59.386370   28259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:51:59.386402   28259 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHHostname
	I0725 17:51:59.389194   28259 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:51:59.389516   28259 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:51:59.389549   28259 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:51:59.389683   28259 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHPort
	I0725 17:51:59.389866   28259 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHKeyPath
	I0725 17:51:59.389993   28259 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHUsername
	I0725 17:51:59.390127   28259 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m04/id_rsa Username:docker}
	I0725 17:51:59.472192   28259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:51:59.487667   28259 status.go:257] ha-174036-m04 status: &{Name:ha-174036-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr" : exit status 3
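m02 is reported as Host:Error because, even though libvirt still shows the domain as Running, the SSH dial to 192.168.39.197:22 fails with "no route to host", so the storage and kubelet checks for that node never run. A minimal reachability check in the same spirit (a sketch only; minikube's sshutil retries and opens a full SSH session rather than a bare TCP dial):

// reachable.go: classify a node by whether its SSH port accepts a TCP connection.
package main

import (
	"fmt"
	"net"
	"time"
)

func hostState(addr string) string {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// "connect: no route to host" lands here, as seen for ha-174036-m02 above.
		return fmt.Sprintf("Error (%v)", err)
	}
	conn.Close()
	return "Running"
}

func main() {
	fmt.Println("ha-174036-m02:", hostState("192.168.39.197:22"))
}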
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174036 -n ha-174036
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174036 logs -n 25: (1.3633292s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile106261026/001/cp-test_ha-174036-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036:/home/docker/cp-test_ha-174036-m03_ha-174036.txt                      |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036 sudo cat                                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m03_ha-174036.txt                                |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m02:/home/docker/cp-test_ha-174036-m03_ha-174036-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m02 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m03_ha-174036-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04:/home/docker/cp-test_ha-174036-m03_ha-174036-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m04 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m03_ha-174036-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp testdata/cp-test.txt                                               | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile106261026/001/cp-test_ha-174036-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036:/home/docker/cp-test_ha-174036-m04_ha-174036.txt                      |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036 sudo cat                                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036.txt                                |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m02:/home/docker/cp-test_ha-174036-m04_ha-174036-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m02 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03:/home/docker/cp-test_ha-174036-m04_ha-174036-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m03 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-174036 node stop m02 -v=7                                                    | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 17:45:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:45:00.348770   23738 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:45:00.348857   23738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:45:00.348865   23738 out.go:304] Setting ErrFile to fd 2...
	I0725 17:45:00.348869   23738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:45:00.349027   23738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:45:00.349539   23738 out.go:298] Setting JSON to false
	I0725 17:45:00.350312   23738 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1644,"bootTime":1721927856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 17:45:00.350383   23738 start.go:139] virtualization: kvm guest
	I0725 17:45:00.352577   23738 out.go:177] * [ha-174036] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 17:45:00.353919   23738 notify.go:220] Checking for updates...
	I0725 17:45:00.353961   23738 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 17:45:00.355138   23738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:45:00.356353   23738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:45:00.357757   23738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:45:00.358988   23738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 17:45:00.360117   23738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 17:45:00.361418   23738 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 17:45:00.395042   23738 out.go:177] * Using the kvm2 driver based on user configuration
	I0725 17:45:00.396396   23738 start.go:297] selected driver: kvm2
	I0725 17:45:00.396418   23738 start.go:901] validating driver "kvm2" against <nil>
	I0725 17:45:00.396428   23738 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:45:00.397096   23738 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:45:00.397175   23738 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 17:45:00.411464   23738 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 17:45:00.411507   23738 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 17:45:00.411738   23738 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:45:00.411765   23738 cni.go:84] Creating CNI manager for ""
	I0725 17:45:00.411774   23738 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0725 17:45:00.411785   23738 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0725 17:45:00.411844   23738 start.go:340] cluster config:
	{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:45:00.411984   23738 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:45:00.413645   23738 out.go:177] * Starting "ha-174036" primary control-plane node in "ha-174036" cluster
	I0725 17:45:00.414740   23738 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:45:00.414773   23738 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 17:45:00.414785   23738 cache.go:56] Caching tarball of preloaded images
	I0725 17:45:00.414853   23738 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 17:45:00.414865   23738 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 17:45:00.415171   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:45:00.415193   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json: {Name:mk2194c9dd658db00a21b20213f9200952dd6688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:00.415337   23738 start.go:360] acquireMachinesLock for ha-174036: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 17:45:00.415370   23738 start.go:364] duration metric: took 17.988µs to acquireMachinesLock for "ha-174036"
	I0725 17:45:00.415384   23738 start.go:93] Provisioning new machine with config: &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:45:00.415465   23738 start.go:125] createHost starting for "" (driver="kvm2")
	I0725 17:45:00.416982   23738 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 17:45:00.417113   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:00.417156   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:00.430633   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0725 17:45:00.431025   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:00.431524   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:00.431546   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:00.431886   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:00.432088   23738 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:45:00.432255   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:00.432479   23738 start.go:159] libmachine.API.Create for "ha-174036" (driver="kvm2")
	I0725 17:45:00.432513   23738 client.go:168] LocalClient.Create starting
	I0725 17:45:00.432565   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 17:45:00.432604   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:45:00.432622   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:45:00.432688   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 17:45:00.432708   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:45:00.432724   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:45:00.432741   23738 main.go:141] libmachine: Running pre-create checks...
	I0725 17:45:00.432751   23738 main.go:141] libmachine: (ha-174036) Calling .PreCreateCheck
	I0725 17:45:00.433073   23738 main.go:141] libmachine: (ha-174036) Calling .GetConfigRaw
	I0725 17:45:00.433475   23738 main.go:141] libmachine: Creating machine...
	I0725 17:45:00.433490   23738 main.go:141] libmachine: (ha-174036) Calling .Create
	I0725 17:45:00.433633   23738 main.go:141] libmachine: (ha-174036) Creating KVM machine...
	I0725 17:45:00.434996   23738 main.go:141] libmachine: (ha-174036) DBG | found existing default KVM network
	I0725 17:45:00.435642   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:00.435516   23761 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0725 17:45:00.435673   23738 main.go:141] libmachine: (ha-174036) DBG | created network xml: 
	I0725 17:45:00.435690   23738 main.go:141] libmachine: (ha-174036) DBG | <network>
	I0725 17:45:00.435769   23738 main.go:141] libmachine: (ha-174036) DBG |   <name>mk-ha-174036</name>
	I0725 17:45:00.435794   23738 main.go:141] libmachine: (ha-174036) DBG |   <dns enable='no'/>
	I0725 17:45:00.435807   23738 main.go:141] libmachine: (ha-174036) DBG |   
	I0725 17:45:00.435819   23738 main.go:141] libmachine: (ha-174036) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0725 17:45:00.435828   23738 main.go:141] libmachine: (ha-174036) DBG |     <dhcp>
	I0725 17:45:00.435837   23738 main.go:141] libmachine: (ha-174036) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0725 17:45:00.435843   23738 main.go:141] libmachine: (ha-174036) DBG |     </dhcp>
	I0725 17:45:00.435851   23738 main.go:141] libmachine: (ha-174036) DBG |   </ip>
	I0725 17:45:00.435864   23738 main.go:141] libmachine: (ha-174036) DBG |   
	I0725 17:45:00.435875   23738 main.go:141] libmachine: (ha-174036) DBG | </network>
	I0725 17:45:00.435895   23738 main.go:141] libmachine: (ha-174036) DBG | 
	I0725 17:45:00.441387   23738 main.go:141] libmachine: (ha-174036) DBG | trying to create private KVM network mk-ha-174036 192.168.39.0/24...
	I0725 17:45:00.505314   23738 main.go:141] libmachine: (ha-174036) DBG | private KVM network mk-ha-174036 192.168.39.0/24 created
	I0725 17:45:00.505386   23738 main.go:141] libmachine: (ha-174036) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036 ...
	I0725 17:45:00.505412   23738 main.go:141] libmachine: (ha-174036) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 17:45:00.505455   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:00.505308   23761 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:45:00.505510   23738 main.go:141] libmachine: (ha-174036) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 17:45:00.744739   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:00.744575   23761 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa...
	I0725 17:45:00.989987   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:00.989829   23761 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/ha-174036.rawdisk...
	I0725 17:45:00.990015   23738 main.go:141] libmachine: (ha-174036) DBG | Writing magic tar header
	I0725 17:45:00.990030   23738 main.go:141] libmachine: (ha-174036) DBG | Writing SSH key tar header
	I0725 17:45:00.990043   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:00.989944   23761 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036 ...
	I0725 17:45:00.990057   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036
	I0725 17:45:00.990083   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036 (perms=drwx------)
	I0725 17:45:00.990091   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 17:45:00.990101   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:45:00.990107   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 17:45:00.990114   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 17:45:00.990130   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 17:45:00.990141   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 17:45:00.990225   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 17:45:00.990277   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 17:45:00.990286   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 17:45:00.990319   23738 main.go:141] libmachine: (ha-174036) Creating domain...
	I0725 17:45:00.990345   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins
	I0725 17:45:00.990364   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home
	I0725 17:45:00.990375   23738 main.go:141] libmachine: (ha-174036) DBG | Skipping /home - not owner
	I0725 17:45:00.991283   23738 main.go:141] libmachine: (ha-174036) define libvirt domain using xml: 
	I0725 17:45:00.991301   23738 main.go:141] libmachine: (ha-174036) <domain type='kvm'>
	I0725 17:45:00.991311   23738 main.go:141] libmachine: (ha-174036)   <name>ha-174036</name>
	I0725 17:45:00.991329   23738 main.go:141] libmachine: (ha-174036)   <memory unit='MiB'>2200</memory>
	I0725 17:45:00.991338   23738 main.go:141] libmachine: (ha-174036)   <vcpu>2</vcpu>
	I0725 17:45:00.991345   23738 main.go:141] libmachine: (ha-174036)   <features>
	I0725 17:45:00.991353   23738 main.go:141] libmachine: (ha-174036)     <acpi/>
	I0725 17:45:00.991367   23738 main.go:141] libmachine: (ha-174036)     <apic/>
	I0725 17:45:00.991375   23738 main.go:141] libmachine: (ha-174036)     <pae/>
	I0725 17:45:00.991386   23738 main.go:141] libmachine: (ha-174036)     
	I0725 17:45:00.991391   23738 main.go:141] libmachine: (ha-174036)   </features>
	I0725 17:45:00.991395   23738 main.go:141] libmachine: (ha-174036)   <cpu mode='host-passthrough'>
	I0725 17:45:00.991400   23738 main.go:141] libmachine: (ha-174036)   
	I0725 17:45:00.991404   23738 main.go:141] libmachine: (ha-174036)   </cpu>
	I0725 17:45:00.991409   23738 main.go:141] libmachine: (ha-174036)   <os>
	I0725 17:45:00.991415   23738 main.go:141] libmachine: (ha-174036)     <type>hvm</type>
	I0725 17:45:00.991421   23738 main.go:141] libmachine: (ha-174036)     <boot dev='cdrom'/>
	I0725 17:45:00.991427   23738 main.go:141] libmachine: (ha-174036)     <boot dev='hd'/>
	I0725 17:45:00.991440   23738 main.go:141] libmachine: (ha-174036)     <bootmenu enable='no'/>
	I0725 17:45:00.991454   23738 main.go:141] libmachine: (ha-174036)   </os>
	I0725 17:45:00.991479   23738 main.go:141] libmachine: (ha-174036)   <devices>
	I0725 17:45:00.991498   23738 main.go:141] libmachine: (ha-174036)     <disk type='file' device='cdrom'>
	I0725 17:45:00.991512   23738 main.go:141] libmachine: (ha-174036)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/boot2docker.iso'/>
	I0725 17:45:00.991521   23738 main.go:141] libmachine: (ha-174036)       <target dev='hdc' bus='scsi'/>
	I0725 17:45:00.991531   23738 main.go:141] libmachine: (ha-174036)       <readonly/>
	I0725 17:45:00.991537   23738 main.go:141] libmachine: (ha-174036)     </disk>
	I0725 17:45:00.991556   23738 main.go:141] libmachine: (ha-174036)     <disk type='file' device='disk'>
	I0725 17:45:00.991570   23738 main.go:141] libmachine: (ha-174036)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 17:45:00.991583   23738 main.go:141] libmachine: (ha-174036)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/ha-174036.rawdisk'/>
	I0725 17:45:00.991588   23738 main.go:141] libmachine: (ha-174036)       <target dev='hda' bus='virtio'/>
	I0725 17:45:00.991593   23738 main.go:141] libmachine: (ha-174036)     </disk>
	I0725 17:45:00.991597   23738 main.go:141] libmachine: (ha-174036)     <interface type='network'>
	I0725 17:45:00.991602   23738 main.go:141] libmachine: (ha-174036)       <source network='mk-ha-174036'/>
	I0725 17:45:00.991606   23738 main.go:141] libmachine: (ha-174036)       <model type='virtio'/>
	I0725 17:45:00.991611   23738 main.go:141] libmachine: (ha-174036)     </interface>
	I0725 17:45:00.991615   23738 main.go:141] libmachine: (ha-174036)     <interface type='network'>
	I0725 17:45:00.991620   23738 main.go:141] libmachine: (ha-174036)       <source network='default'/>
	I0725 17:45:00.991624   23738 main.go:141] libmachine: (ha-174036)       <model type='virtio'/>
	I0725 17:45:00.991651   23738 main.go:141] libmachine: (ha-174036)     </interface>
	I0725 17:45:00.991667   23738 main.go:141] libmachine: (ha-174036)     <serial type='pty'>
	I0725 17:45:00.991674   23738 main.go:141] libmachine: (ha-174036)       <target port='0'/>
	I0725 17:45:00.991678   23738 main.go:141] libmachine: (ha-174036)     </serial>
	I0725 17:45:00.991683   23738 main.go:141] libmachine: (ha-174036)     <console type='pty'>
	I0725 17:45:00.991687   23738 main.go:141] libmachine: (ha-174036)       <target type='serial' port='0'/>
	I0725 17:45:00.991695   23738 main.go:141] libmachine: (ha-174036)     </console>
	I0725 17:45:00.991699   23738 main.go:141] libmachine: (ha-174036)     <rng model='virtio'>
	I0725 17:45:00.991704   23738 main.go:141] libmachine: (ha-174036)       <backend model='random'>/dev/random</backend>
	I0725 17:45:00.991708   23738 main.go:141] libmachine: (ha-174036)     </rng>
	I0725 17:45:00.991712   23738 main.go:141] libmachine: (ha-174036)     
	I0725 17:45:00.991716   23738 main.go:141] libmachine: (ha-174036)     
	I0725 17:45:00.991721   23738 main.go:141] libmachine: (ha-174036)   </devices>
	I0725 17:45:00.991724   23738 main.go:141] libmachine: (ha-174036) </domain>
	I0725 17:45:00.991730   23738 main.go:141] libmachine: (ha-174036) 
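
[Editor's note] The XML logged above is the libvirt domain definition the kvm2 driver hands to libvirtd before booting the VM (two virtio disks, two networks, serial console, virtio RNG). Purely as an illustration of that step, and not minikube's actual code path, a minimal sketch of defining and starting such a domain with the libvirt Go bindings might look like the following; the module path and the domain.xml file name are assumptions for the example.

    package main

    import (
    	"log"
    	"os"

    	libvirt "libvirt.org/go/libvirt" // assumed module path; older code imports github.com/libvirt/libvirt-go
    )

    func main() {
    	// domain.xml is assumed to contain a definition like the one logged above.
    	xml, err := os.ReadFile("domain.xml")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Same connection URI as KVMQemuURI in the cluster config further down.
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Define the persistent domain from XML, then boot it.
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()
    	if err := dom.Create(); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("domain defined and started")
    }
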
	I0725 17:45:00.996216   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:49:b0:79 in network default
	I0725 17:45:00.996792   23738 main.go:141] libmachine: (ha-174036) Ensuring networks are active...
	I0725 17:45:00.996808   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:00.997409   23738 main.go:141] libmachine: (ha-174036) Ensuring network default is active
	I0725 17:45:00.997709   23738 main.go:141] libmachine: (ha-174036) Ensuring network mk-ha-174036 is active
	I0725 17:45:00.998094   23738 main.go:141] libmachine: (ha-174036) Getting domain xml...
	I0725 17:45:00.998683   23738 main.go:141] libmachine: (ha-174036) Creating domain...
	I0725 17:45:02.172283   23738 main.go:141] libmachine: (ha-174036) Waiting to get IP...
	I0725 17:45:02.172950   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:02.173296   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:02.173335   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:02.173277   23761 retry.go:31] will retry after 205.432801ms: waiting for machine to come up
	I0725 17:45:02.380899   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:02.381266   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:02.381313   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:02.381235   23761 retry.go:31] will retry after 287.651092ms: waiting for machine to come up
	I0725 17:45:02.670750   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:02.671046   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:02.671072   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:02.671001   23761 retry.go:31] will retry after 381.489127ms: waiting for machine to come up
	I0725 17:45:03.054449   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:03.054925   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:03.054951   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:03.054890   23761 retry.go:31] will retry after 590.979983ms: waiting for machine to come up
	I0725 17:45:03.647535   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:03.647896   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:03.647924   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:03.647815   23761 retry.go:31] will retry after 502.305492ms: waiting for machine to come up
	I0725 17:45:04.151385   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:04.151760   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:04.151788   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:04.151714   23761 retry.go:31] will retry after 653.566358ms: waiting for machine to come up
	I0725 17:45:04.806401   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:04.806814   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:04.806857   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:04.806780   23761 retry.go:31] will retry after 1.160094808s: waiting for machine to come up
	I0725 17:45:05.968613   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:05.969103   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:05.969127   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:05.969060   23761 retry.go:31] will retry after 1.254291954s: waiting for machine to come up
	I0725 17:45:07.225610   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:07.226094   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:07.226122   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:07.226028   23761 retry.go:31] will retry after 1.803882415s: waiting for machine to come up
	I0725 17:45:09.031955   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:09.032498   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:09.032525   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:09.032453   23761 retry.go:31] will retry after 1.590991223s: waiting for machine to come up
	I0725 17:45:10.625217   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:10.625590   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:10.625616   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:10.625545   23761 retry.go:31] will retry after 2.115148623s: waiting for machine to come up
	I0725 17:45:12.743735   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:12.744200   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:12.744227   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:12.744144   23761 retry.go:31] will retry after 2.279680866s: waiting for machine to come up
	I0725 17:45:15.026530   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:15.026947   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:15.026989   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:15.026903   23761 retry.go:31] will retry after 3.465368523s: waiting for machine to come up
	I0725 17:45:18.496008   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:18.496393   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:18.496420   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:18.496292   23761 retry.go:31] will retry after 3.691118212s: waiting for machine to come up
	I0725 17:45:22.190099   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.190574   23738 main.go:141] libmachine: (ha-174036) Found IP for machine: 192.168.39.165
	I0725 17:45:22.190589   23738 main.go:141] libmachine: (ha-174036) Reserving static IP address...
	I0725 17:45:22.190598   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has current primary IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.191024   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find host DHCP lease matching {name: "ha-174036", mac: "52:54:00:0f:45:3b", ip: "192.168.39.165"} in network mk-ha-174036
	I0725 17:45:22.259473   23738 main.go:141] libmachine: (ha-174036) DBG | Getting to WaitForSSH function...
	I0725 17:45:22.259505   23738 main.go:141] libmachine: (ha-174036) Reserved static IP address: 192.168.39.165
	I0725 17:45:22.259518   23738 main.go:141] libmachine: (ha-174036) Waiting for SSH to be available...
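
[Editor's note] The run of "will retry after ..." lines above is a polling loop: the driver repeatedly checks libvirt's DHCP leases for the new machine's IP and sleeps for a growing, jittered interval between attempts until the lease appears. A rough, self-contained sketch of that pattern follows; the function name, intervals, and placeholder condition are illustrative, not minikube's retry.go.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor polls check until it succeeds, fails, or the deadline elapses,
    // sleeping a growing jittered interval between attempts -- mirroring the
    // 205ms, 287ms, 381ms, ... delays in the log above.
    func waitFor(deadline time.Duration, check func() (bool, error)) error {
    	start := time.Now()
    	base := 200 * time.Millisecond
    	for {
    		ok, err := check()
    		if err != nil {
    			return err
    		}
    		if ok {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return errors.New("timed out waiting for machine to come up")
    		}
    		sleep := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		base = base * 3 / 2
    	}
    }

    func main() {
    	attempts := 0
    	_ = waitFor(30*time.Second, func() (bool, error) {
    		attempts++
    		// Placeholder condition; a real check would query the DHCP lease table.
    		return attempts >= 5, nil
    	})
    }
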
	I0725 17:45:22.261986   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.262346   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.262375   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.262565   23738 main.go:141] libmachine: (ha-174036) DBG | Using SSH client type: external
	I0725 17:45:22.262594   23738 main.go:141] libmachine: (ha-174036) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa (-rw-------)
	I0725 17:45:22.262633   23738 main.go:141] libmachine: (ha-174036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 17:45:22.262646   23738 main.go:141] libmachine: (ha-174036) DBG | About to run SSH command:
	I0725 17:45:22.262661   23738 main.go:141] libmachine: (ha-174036) DBG | exit 0
	I0725 17:45:22.383973   23738 main.go:141] libmachine: (ha-174036) DBG | SSH cmd err, output: <nil>: 
	I0725 17:45:22.384244   23738 main.go:141] libmachine: (ha-174036) KVM machine creation complete!
	I0725 17:45:22.384527   23738 main.go:141] libmachine: (ha-174036) Calling .GetConfigRaw
	I0725 17:45:22.385028   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:22.385267   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:22.385461   23738 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 17:45:22.385474   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:45:22.386912   23738 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 17:45:22.386924   23738 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 17:45:22.386929   23738 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 17:45:22.386934   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:22.388972   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.389264   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.389288   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.389458   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:22.389627   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.389755   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.389887   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:22.390016   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:22.390209   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:22.390222   23738 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 17:45:22.491704   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:45:22.491727   23738 main.go:141] libmachine: Detecting the provisioner...
	I0725 17:45:22.491735   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:22.494256   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.494534   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.494556   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.494686   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:22.494849   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.494975   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.495087   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:22.495251   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:22.495415   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:22.495425   23738 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 17:45:22.600486   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 17:45:22.600570   23738 main.go:141] libmachine: found compatible host: buildroot
	I0725 17:45:22.600586   23738 main.go:141] libmachine: Provisioning with buildroot...
	I0725 17:45:22.600598   23738 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:45:22.600843   23738 buildroot.go:166] provisioning hostname "ha-174036"
	I0725 17:45:22.600879   23738 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:45:22.601051   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:22.603640   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.603972   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.603992   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.604140   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:22.604336   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.604496   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.604743   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:22.604937   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:22.605114   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:22.605129   23738 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174036 && echo "ha-174036" | sudo tee /etc/hostname
	I0725 17:45:22.721381   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174036
	
	I0725 17:45:22.721406   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:22.724161   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.724578   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.724605   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.724750   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:22.724962   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.725113   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.725265   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:22.725429   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:22.725602   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:22.725617   23738 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174036/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:45:22.836494   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:45:22.836528   23738 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 17:45:22.836559   23738 buildroot.go:174] setting up certificates
	I0725 17:45:22.836568   23738 provision.go:84] configureAuth start
	I0725 17:45:22.836577   23738 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:45:22.836867   23738 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:45:22.839498   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.839816   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.839838   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.839991   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:22.842187   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.842512   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.842531   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.842662   23738 provision.go:143] copyHostCerts
	I0725 17:45:22.842686   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:45:22.842718   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 17:45:22.842729   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:45:22.842813   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 17:45:22.842919   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:45:22.842951   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 17:45:22.842960   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:45:22.842999   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 17:45:22.843069   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:45:22.843092   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 17:45:22.843101   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:45:22.843141   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 17:45:22.843217   23738 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.ha-174036 san=[127.0.0.1 192.168.39.165 ha-174036 localhost minikube]
	I0725 17:45:23.378310   23738 provision.go:177] copyRemoteCerts
	I0725 17:45:23.378376   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:45:23.378398   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:23.381252   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.381659   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.381689   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.381866   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:23.382088   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.382221   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:23.382367   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:23.461843   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 17:45:23.461909   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 17:45:23.484737   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 17:45:23.484824   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0725 17:45:23.506454   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 17:45:23.506536   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0725 17:45:23.527417   23738 provision.go:87] duration metric: took 690.838248ms to configureAuth
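
[Editor's note] configureAuth above copied the CA and client certs plus a freshly generated server cert onto the guest over SSH, authenticating with the id_rsa key created for the machine. As an illustration of that transport only (not minikube's ssh_runner), here is a minimal sketch using golang.org/x/crypto/ssh that runs one remote command with key auth; the host, user, and key path are taken from the log and are the only inputs assumed.

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and address as reported in the log above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no used above
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.165:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()

    	// The same kind of remote command the provisioner runs, e.g. reading os-release.
    	out, err := sess.CombinedOutput("cat /etc/os-release")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(string(out))
    }

File pushes such as the scp of ca.pem and server.pem seen later in the log use the same authenticated session, just with the payload streamed to the remote side instead of a command's output read back.
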
	I0725 17:45:23.527441   23738 buildroot.go:189] setting minikube options for container-runtime
	I0725 17:45:23.527603   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:45:23.527680   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:23.530399   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.530720   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.530744   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.530854   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:23.531033   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.531219   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.531359   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:23.531495   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:23.531681   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:23.531702   23738 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 17:45:23.785163   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 17:45:23.785187   23738 main.go:141] libmachine: Checking connection to Docker...
	I0725 17:45:23.785195   23738 main.go:141] libmachine: (ha-174036) Calling .GetURL
	I0725 17:45:23.786562   23738 main.go:141] libmachine: (ha-174036) DBG | Using libvirt version 6000000
	I0725 17:45:23.788791   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.789097   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.789120   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.789284   23738 main.go:141] libmachine: Docker is up and running!
	I0725 17:45:23.789313   23738 main.go:141] libmachine: Reticulating splines...
	I0725 17:45:23.789326   23738 client.go:171] duration metric: took 23.356804273s to LocalClient.Create
	I0725 17:45:23.789349   23738 start.go:167] duration metric: took 23.356870648s to libmachine.API.Create "ha-174036"
	I0725 17:45:23.789356   23738 start.go:293] postStartSetup for "ha-174036" (driver="kvm2")
	I0725 17:45:23.789369   23738 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:45:23.789386   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:23.789646   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:45:23.789668   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:23.791519   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.791858   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.791891   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.791993   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:23.792167   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.792336   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:23.792451   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:23.873796   23738 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:45:23.877724   23738 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 17:45:23.877743   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 17:45:23.877800   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 17:45:23.877864   23738 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 17:45:23.877874   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 17:45:23.877955   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:45:23.886561   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:45:23.909193   23738 start.go:296] duration metric: took 119.821515ms for postStartSetup
	I0725 17:45:23.909245   23738 main.go:141] libmachine: (ha-174036) Calling .GetConfigRaw
	I0725 17:45:23.909781   23738 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:45:23.912923   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.913305   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.913328   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.913546   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:45:23.913716   23738 start.go:128] duration metric: took 23.498242386s to createHost
	I0725 17:45:23.913735   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:23.915969   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.916280   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.916307   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.916468   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:23.916635   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.916846   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.916993   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:23.917139   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:23.917317   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:23.917331   23738 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 17:45:24.024959   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929524.002730715
	
	I0725 17:45:24.024988   23738 fix.go:216] guest clock: 1721929524.002730715
	I0725 17:45:24.024996   23738 fix.go:229] Guest: 2024-07-25 17:45:24.002730715 +0000 UTC Remote: 2024-07-25 17:45:23.913726357 +0000 UTC m=+23.597775412 (delta=89.004358ms)
	I0725 17:45:24.025016   23738 fix.go:200] guest clock delta is within tolerance: 89.004358ms
	I0725 17:45:24.025020   23738 start.go:83] releasing machines lock for "ha-174036", held for 23.609644733s
	I0725 17:45:24.025041   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:24.025281   23738 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:45:24.028425   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.028859   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:24.028888   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.029042   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:24.029518   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:24.029715   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:24.029828   23738 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 17:45:24.029880   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:24.029955   23738 ssh_runner.go:195] Run: cat /version.json
	I0725 17:45:24.029965   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:24.032752   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.032824   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.033140   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:24.033159   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.033175   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:24.033184   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.033287   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:24.033427   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:24.033483   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:24.033581   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:24.033641   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:24.033738   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:24.033792   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:24.033881   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:24.143291   23738 ssh_runner.go:195] Run: systemctl --version
	I0725 17:45:24.149234   23738 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 17:45:24.301651   23738 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 17:45:24.307405   23738 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 17:45:24.307462   23738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 17:45:24.322949   23738 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 17:45:24.322973   23738 start.go:495] detecting cgroup driver to use...
	I0725 17:45:24.323045   23738 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 17:45:24.339777   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 17:45:24.353592   23738 docker.go:217] disabling cri-docker service (if available) ...
	I0725 17:45:24.353673   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 17:45:24.366965   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 17:45:24.380148   23738 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 17:45:24.496094   23738 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 17:45:24.655280   23738 docker.go:233] disabling docker service ...
	I0725 17:45:24.655348   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 17:45:24.668516   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 17:45:24.680629   23738 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 17:45:24.788029   23738 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 17:45:24.895924   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 17:45:24.910408   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:45:24.927406   23738 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 17:45:24.927480   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:24.937032   23738 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 17:45:24.937128   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:24.946821   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:24.965352   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:24.976399   23738 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 17:45:24.987018   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:24.996636   23738 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:25.012084   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:25.021555   23738 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 17:45:25.030114   23738 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 17:45:25.030161   23738 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 17:45:25.041519   23738 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 17:45:25.050245   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:45:25.156592   23738 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 17:45:25.283870   23738 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 17:45:25.283944   23738 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 17:45:25.288595   23738 start.go:563] Will wait 60s for crictl version
	I0725 17:45:25.288644   23738 ssh_runner.go:195] Run: which crictl
	I0725 17:45:25.291945   23738 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 17:45:25.328932   23738 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 17:45:25.329017   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:45:25.355748   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:45:25.382590   23738 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 17:45:25.383661   23738 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:45:25.386560   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:25.387040   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:25.387061   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:25.387309   23738 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 17:45:25.390885   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:45:25.402213   23738 kubeadm.go:883] updating cluster {Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 17:45:25.402319   23738 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:45:25.402376   23738 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:45:25.430493   23738 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 17:45:25.430560   23738 ssh_runner.go:195] Run: which lz4
	I0725 17:45:25.433912   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0725 17:45:25.434009   23738 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 17:45:25.437770   23738 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 17:45:25.437801   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 17:45:26.638853   23738 crio.go:462] duration metric: took 1.20486584s to copy over tarball
	I0725 17:45:26.638922   23738 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 17:45:28.699435   23738 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060481012s)
	I0725 17:45:28.699463   23738 crio.go:469] duration metric: took 2.060587652s to extract the tarball
	I0725 17:45:28.699472   23738 ssh_runner.go:146] rm: /preloaded.tar.lz4
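
[Editor's note] Because no preloaded images were found on the guest, the ~406 MB preloaded-images tarball was copied over and unpacked on the VM with tar -I lz4, then deleted. For a sense of what that archive contains, a small host-side sketch that streams the same .tar.lz4 and lists its entries; it uses the third-party github.com/pierrec/lz4/v4 decompressor, which is an assumption for the example only (minikube itself shells out to tar on the guest).

    package main

    import (
    	"archive/tar"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"github.com/pierrec/lz4/v4" // assumed decompressor; the VM uses `tar -I lz4` instead
    )

    func main() {
    	f, err := os.Open("/home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	// Stream-decompress the lz4 frame and walk the tar entries without extracting.
    	tr := tar.NewReader(lz4.NewReader(f))
    	var total int64
    	for {
    		hdr, err := tr.Next()
    		if err == io.EOF {
    			break
    		}
    		if err != nil {
    			log.Fatal(err)
    		}
    		total += hdr.Size
    		fmt.Println(hdr.Name)
    	}
    	fmt.Printf("total uncompressed payload: %d bytes\n", total)
    }
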
	I0725 17:45:28.736484   23738 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:45:28.780302   23738 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 17:45:28.780335   23738 cache_images.go:84] Images are preloaded, skipping loading
	I0725 17:45:28.780346   23738 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.30.3 crio true true} ...
	I0725 17:45:28.780469   23738 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 17:45:28.780550   23738 ssh_runner.go:195] Run: crio config
	I0725 17:45:28.824121   23738 cni.go:84] Creating CNI manager for ""
	I0725 17:45:28.824139   23738 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0725 17:45:28.824147   23738 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 17:45:28.824172   23738 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174036 NodeName:ha-174036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 17:45:28.824301   23738 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174036"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
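The kubeadm/kubelet/kube-proxy configuration dumped above is rendered from the per-node values logged at kubeadm.go:181. A minimal text/template sketch of that idea (not minikube's actual template) looks like this, using the node IP, node name and API server port from this run:

    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        // Values taken from the log above.
        _ = t.Execute(os.Stdout, struct {
            NodeIP, NodeName string
            APIServerPort    int
        }{NodeIP: "192.168.39.165", NodeName: "ha-174036", APIServerPort: 8443})
    }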
	I0725 17:45:28.824343   23738 kube-vip.go:115] generating kube-vip config ...
	I0725 17:45:28.824398   23738 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 17:45:28.840839   23738 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 17:45:28.840978   23738 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
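The kube-vip static Pod above is what turns the APIServerHAVIP (192.168.39.254) into a floating control-plane endpoint. A trimmed sketch of building such a manifest programmatically, assuming the k8s.io/api, k8s.io/apimachinery and sigs.k8s.io/yaml modules are available and showing only a few of the environment variables, could look like:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{Name: "kube-vip", Namespace: "kube-system"},
            Spec: corev1.PodSpec{
                HostNetwork: true,
                Containers: []corev1.Container{{
                    Name:  "kube-vip",
                    Image: "ghcr.io/kube-vip/kube-vip:v0.8.0",
                    Args:  []string{"manager"},
                    Env: []corev1.EnvVar{
                        {Name: "vip_interface", Value: "eth0"},
                        {Name: "address", Value: "192.168.39.254"}, // the HA VIP
                        {Name: "cp_enable", Value: "true"},
                        {Name: "lb_enable", Value: "true"},
                    },
                }},
            },
        }
        out, _ := yaml.Marshal(pod) // error ignored in this sketch
        fmt.Println(string(out))
    }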
	I0725 17:45:28.841037   23738 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 17:45:28.849797   23738 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 17:45:28.849865   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0725 17:45:28.858373   23738 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0725 17:45:28.873487   23738 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:45:28.888285   23738 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0725 17:45:28.903747   23738 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0725 17:45:28.918947   23738 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0725 17:45:28.922518   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:45:28.933430   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:45:29.060403   23738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:45:29.076772   23738 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036 for IP: 192.168.39.165
	I0725 17:45:29.076800   23738 certs.go:194] generating shared ca certs ...
	I0725 17:45:29.076821   23738 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.076985   23738 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 17:45:29.077052   23738 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 17:45:29.077071   23738 certs.go:256] generating profile certs ...
	I0725 17:45:29.077134   23738 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key
	I0725 17:45:29.077151   23738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt with IP's: []
	I0725 17:45:29.192850   23738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt ...
	I0725 17:45:29.192880   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt: {Name:mkebf1ec254fc7ad5e59237cbac795cf47e3706f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.193079   23738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key ...
	I0725 17:45:29.193094   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key: {Name:mk41a12cac673f5052e7c617cf0b303b5f70f17a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.193203   23738 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.2d6ed432
	I0725 17:45:29.193221   23738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.2d6ed432 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.254]
	I0725 17:45:29.327832   23738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.2d6ed432 ...
	I0725 17:45:29.327865   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.2d6ed432: {Name:mkfb038ba87f0fe0746474375f2c8aa6b3f3cca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.328059   23738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.2d6ed432 ...
	I0725 17:45:29.328077   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.2d6ed432: {Name:mke1eb949d35e1cf45eda64ae6d4d6e75f910032 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.328179   23738 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.2d6ed432 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt
	I0725 17:45:29.328299   23738 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.2d6ed432 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key
	I0725 17:45:29.328399   23738 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key
	I0725 17:45:29.328418   23738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt with IP's: []
	I0725 17:45:29.567193   23738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt ...
	I0725 17:45:29.567221   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt: {Name:mk147b1179eba45024fd1136e15e3d75cb08a351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.567388   23738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key ...
	I0725 17:45:29.567398   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key: {Name:mk5fb29b93e9d87cb88e595d391cd56d14f313ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
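The profile certificates generated above (client, apiserver, aggregator proxy-client) are ordinary x509 key pairs signed by the cached minikubeCA. A condensed standard-library sketch of issuing an apiserver serving certificate with the SAN IPs listed at 17:45:29.193221 follows; a throwaway self-signed CA stands in for minikubeCA and error handling is elided:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA standing in for minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving certificate with the apiserver SANs from the log, including the HA VIP.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.165"), net.ParseIP("192.168.39.254"),
            },
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }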
	I0725 17:45:29.567464   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 17:45:29.567480   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 17:45:29.567490   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 17:45:29.567502   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 17:45:29.567513   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 17:45:29.567523   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 17:45:29.567535   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 17:45:29.567546   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 17:45:29.567597   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 17:45:29.567630   23738 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 17:45:29.567639   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:45:29.567662   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 17:45:29.567683   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:45:29.567703   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 17:45:29.567737   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:45:29.567761   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:45:29.567774   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 17:45:29.567786   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 17:45:29.568301   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:45:29.592957   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 17:45:29.616055   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:45:29.639081   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 17:45:29.660472   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0725 17:45:29.681933   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 17:45:29.704380   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:45:29.726374   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:45:29.749140   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:45:29.770909   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 17:45:29.792848   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 17:45:29.814908   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 17:45:29.830920   23738 ssh_runner.go:195] Run: openssl version
	I0725 17:45:29.836622   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:45:29.849433   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:45:29.853681   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:45:29.853730   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:45:29.861470   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:45:29.873995   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 17:45:29.885073   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 17:45:29.889976   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 17:45:29.890033   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 17:45:29.895771   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 17:45:29.907636   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 17:45:29.919890   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 17:45:29.925295   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 17:45:29.925357   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 17:45:29.930828   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 17:45:29.940716   23738 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 17:45:29.944407   23738 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 17:45:29.944462   23738 kubeadm.go:392] StartCluster: {Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:45:29.944536   23738 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 17:45:29.944593   23738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 17:45:29.982225   23738 cri.go:89] found id: ""
	I0725 17:45:29.982290   23738 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 17:45:29.991416   23738 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:45:30.000464   23738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:45:30.009052   23738 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 17:45:30.009069   23738 kubeadm.go:157] found existing configuration files:
	
	I0725 17:45:30.009110   23738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 17:45:30.017488   23738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 17:45:30.017623   23738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 17:45:30.027429   23738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 17:45:30.036309   23738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 17:45:30.036434   23738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 17:45:30.045244   23738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 17:45:30.053578   23738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 17:45:30.053629   23738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 17:45:30.062119   23738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 17:45:30.069972   23738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 17:45:30.070019   23738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
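The four grep/rm pairs above implement the stale-config cleanup: a pre-existing kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init runs. A rough Go equivalent of that check, run locally rather than over SSH, is:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%s missing or stale, removing\n", f)
                _ = os.Remove(f) // harmless if the file never existed
            }
        }
    }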
	I0725 17:45:30.078925   23738 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 17:45:30.298077   23738 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 17:45:41.465242   23738 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 17:45:41.465293   23738 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 17:45:41.465379   23738 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 17:45:41.465488   23738 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 17:45:41.465581   23738 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 17:45:41.465658   23738 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 17:45:41.467196   23738 out.go:204]   - Generating certificates and keys ...
	I0725 17:45:41.467267   23738 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 17:45:41.467419   23738 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 17:45:41.467497   23738 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 17:45:41.467571   23738 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 17:45:41.467657   23738 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 17:45:41.467725   23738 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 17:45:41.467800   23738 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 17:45:41.467915   23738 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-174036 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0725 17:45:41.467989   23738 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 17:45:41.468140   23738 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-174036 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0725 17:45:41.468223   23738 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 17:45:41.468278   23738 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 17:45:41.468339   23738 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 17:45:41.468390   23738 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 17:45:41.468432   23738 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 17:45:41.468480   23738 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 17:45:41.468544   23738 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 17:45:41.468611   23738 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 17:45:41.468663   23738 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 17:45:41.468753   23738 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 17:45:41.468816   23738 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 17:45:41.470362   23738 out.go:204]   - Booting up control plane ...
	I0725 17:45:41.470443   23738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 17:45:41.470514   23738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 17:45:41.470570   23738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 17:45:41.470684   23738 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 17:45:41.470845   23738 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 17:45:41.470916   23738 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 17:45:41.471061   23738 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 17:45:41.471154   23738 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 17:45:41.471208   23738 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001128185s
	I0725 17:45:41.471326   23738 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 17:45:41.471387   23738 kubeadm.go:310] [api-check] The API server is healthy after 5.774209816s
	I0725 17:45:41.471478   23738 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 17:45:41.471597   23738 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 17:45:41.471692   23738 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 17:45:41.471859   23738 kubeadm.go:310] [mark-control-plane] Marking the node ha-174036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 17:45:41.471909   23738 kubeadm.go:310] [bootstrap-token] Using token: xq8hdz.24cgx0m1lq14udqx
	I0725 17:45:41.473116   23738 out.go:204]   - Configuring RBAC rules ...
	I0725 17:45:41.473203   23738 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 17:45:41.473332   23738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 17:45:41.473462   23738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 17:45:41.473641   23738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 17:45:41.473820   23738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 17:45:41.473896   23738 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 17:45:41.474004   23738 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 17:45:41.474044   23738 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 17:45:41.474098   23738 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 17:45:41.474109   23738 kubeadm.go:310] 
	I0725 17:45:41.474191   23738 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 17:45:41.474200   23738 kubeadm.go:310] 
	I0725 17:45:41.474276   23738 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 17:45:41.474283   23738 kubeadm.go:310] 
	I0725 17:45:41.474317   23738 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 17:45:41.474373   23738 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 17:45:41.474419   23738 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 17:45:41.474428   23738 kubeadm.go:310] 
	I0725 17:45:41.474475   23738 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 17:45:41.474481   23738 kubeadm.go:310] 
	I0725 17:45:41.474523   23738 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 17:45:41.474529   23738 kubeadm.go:310] 
	I0725 17:45:41.474570   23738 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 17:45:41.474635   23738 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 17:45:41.474709   23738 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 17:45:41.474718   23738 kubeadm.go:310] 
	I0725 17:45:41.474816   23738 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 17:45:41.474914   23738 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 17:45:41.474922   23738 kubeadm.go:310] 
	I0725 17:45:41.474984   23738 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xq8hdz.24cgx0m1lq14udqx \
	I0725 17:45:41.475065   23738 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 \
	I0725 17:45:41.475086   23738 kubeadm.go:310] 	--control-plane 
	I0725 17:45:41.475092   23738 kubeadm.go:310] 
	I0725 17:45:41.475178   23738 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 17:45:41.475185   23738 kubeadm.go:310] 
	I0725 17:45:41.475270   23738 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xq8hdz.24cgx0m1lq14udqx \
	I0725 17:45:41.475402   23738 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 
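The --discovery-token-ca-cert-hash value printed in the join commands above is not arbitrary: kubeadm derives it as the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. An illustrative sketch that recomputes it from the in-VM CA path used in this run:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }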
	I0725 17:45:41.475421   23738 cni.go:84] Creating CNI manager for ""
	I0725 17:45:41.475429   23738 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0725 17:45:41.477532   23738 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0725 17:45:41.478593   23738 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0725 17:45:41.484967   23738 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0725 17:45:41.484986   23738 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0725 17:45:41.505960   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0725 17:45:41.830998   23738 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 17:45:41.831050   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:41.831080   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174036 minikube.k8s.io/updated_at=2024_07_25T17_45_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=ha-174036 minikube.k8s.io/primary=true
	I0725 17:45:41.851557   23738 ops.go:34] apiserver oom_adj: -16
	I0725 17:45:42.051947   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:42.552034   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:43.052204   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:43.552098   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:44.052678   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:44.552101   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:45.051992   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:45.552109   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:46.052037   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:46.552681   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:47.052217   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:47.552608   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:48.052118   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:48.551977   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:49.052647   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:49.552945   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:50.052583   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:50.552590   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:51.052051   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:51.552107   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:52.052883   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:52.552597   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:53.052284   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:53.552703   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:54.052355   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:54.174917   23738 kubeadm.go:1113] duration metric: took 12.343915886s to wait for elevateKubeSystemPrivileges
	I0725 17:45:54.174954   23738 kubeadm.go:394] duration metric: took 24.230496074s to StartCluster
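The block of repeated "kubectl get sa default" invocations between 17:45:42 and 17:45:54 is the elevateKubeSystemPrivileges wait: minikube polls roughly every 500ms until the default ServiceAccount exists before declaring StartCluster done. A minimal re-creation of that loop, assuming the binary and kubeconfig paths shown in the log, is:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default ServiceAccount is available")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default ServiceAccount")
    }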
	I0725 17:45:54.174977   23738 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:54.175040   23738 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:45:54.175696   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:54.175879   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 17:45:54.175895   23738 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 17:45:54.175871   23738 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:45:54.175965   23738 addons.go:69] Setting default-storageclass=true in profile "ha-174036"
	I0725 17:45:54.175974   23738 start.go:241] waiting for startup goroutines ...
	I0725 17:45:54.175959   23738 addons.go:69] Setting storage-provisioner=true in profile "ha-174036"
	I0725 17:45:54.175989   23738 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-174036"
	I0725 17:45:54.176007   23738 addons.go:234] Setting addon storage-provisioner=true in "ha-174036"
	I0725 17:45:54.176045   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:45:54.176079   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:45:54.176400   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:54.176421   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:54.176432   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:54.176436   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:54.191504   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0725 17:45:54.191727   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0725 17:45:54.191938   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:54.192033   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:54.192459   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:54.192483   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:54.192590   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:54.192612   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:54.192864   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:54.192969   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:54.193146   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:45:54.193385   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:54.193414   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:54.195361   23738 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:45:54.195619   23738 kapi.go:59] client config for ha-174036: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 17:45:54.196046   23738 cert_rotation.go:137] Starting client certificate rotation controller
	I0725 17:45:54.196183   23738 addons.go:234] Setting addon default-storageclass=true in "ha-174036"
	I0725 17:45:54.196220   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:45:54.196511   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:54.196536   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:54.209293   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I0725 17:45:54.209809   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:54.210326   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:54.210350   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:54.210787   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:54.211030   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:45:54.211088   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33643
	I0725 17:45:54.211466   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:54.211847   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:54.211870   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:54.212266   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:54.212733   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:54.212783   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:54.213029   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:54.215301   23738 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 17:45:54.216768   23738 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:45:54.216784   23738 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 17:45:54.216797   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:54.219959   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:54.220356   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:54.220383   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:54.220561   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:54.220740   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:54.220905   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:54.221059   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:54.227793   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I0725 17:45:54.228108   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:54.228544   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:54.228561   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:54.228827   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:54.229009   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:45:54.230303   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:54.230484   23738 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 17:45:54.230501   23738 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 17:45:54.230515   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:54.233106   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:54.233499   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:54.233532   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:54.233692   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:54.233854   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:54.233995   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:54.234118   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:54.354183   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 17:45:54.367225   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 17:45:54.369226   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:45:54.685724   23738 main.go:141] libmachine: Making call to close driver server
	I0725 17:45:54.685745   23738 main.go:141] libmachine: (ha-174036) Calling .Close
	I0725 17:45:54.686003   23738 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:45:54.686018   23738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:45:54.686028   23738 main.go:141] libmachine: Making call to close driver server
	I0725 17:45:54.686035   23738 main.go:141] libmachine: (ha-174036) Calling .Close
	I0725 17:45:54.686267   23738 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:45:54.686281   23738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:45:54.686392   23738 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0725 17:45:54.686403   23738 round_trippers.go:469] Request Headers:
	I0725 17:45:54.686413   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:45:54.686418   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:45:54.706035   23738 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0725 17:45:54.706509   23738 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0725 17:45:54.706522   23738 round_trippers.go:469] Request Headers:
	I0725 17:45:54.706529   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:45:54.706536   23738 round_trippers.go:473]     Content-Type: application/json
	I0725 17:45:54.706539   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:45:54.717026   23738 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0725 17:45:54.717173   23738 main.go:141] libmachine: Making call to close driver server
	I0725 17:45:54.717187   23738 main.go:141] libmachine: (ha-174036) Calling .Close
	I0725 17:45:54.717481   23738 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:45:54.717499   23738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:45:54.942115   23738 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
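The long sed pipeline at 17:45:54.367225 is what produced the "host record injected" message above: it splices a hosts{} block mapping host.minikube.internal to 192.168.39.1 in front of the forward plugin in the coredns ConfigMap. A small Go sketch of the same string edit, applied to a hypothetical default Corefile, shows the intent:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := `.:53 {
            errors
            health
            forward . /etc/resolv.conf {
               max_concurrent 1000
            }
            cache 30
    }`
        hosts := `        hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }
    `
        // Insert the hosts block immediately before the forward plugin.
        idx := strings.Index(corefile, "forward .")
        if idx < 0 {
            fmt.Println(corefile) // nothing to patch
            return
        }
        fmt.Println(corefile[:idx] + strings.TrimLeft(hosts, " ") + strings.Repeat(" ", 8) + corefile[idx:])
    }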
	I0725 17:45:55.149587   23738 main.go:141] libmachine: Making call to close driver server
	I0725 17:45:55.149607   23738 main.go:141] libmachine: (ha-174036) Calling .Close
	I0725 17:45:55.149910   23738 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:45:55.149925   23738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:45:55.149934   23738 main.go:141] libmachine: Making call to close driver server
	I0725 17:45:55.149942   23738 main.go:141] libmachine: (ha-174036) Calling .Close
	I0725 17:45:55.150235   23738 main.go:141] libmachine: (ha-174036) DBG | Closing plugin on server side
	I0725 17:45:55.150278   23738 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:45:55.150294   23738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:45:55.152016   23738 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0725 17:45:55.153312   23738 addons.go:510] duration metric: took 977.418617ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0725 17:45:55.153351   23738 start.go:246] waiting for cluster config update ...
	I0725 17:45:55.153365   23738 start.go:255] writing updated cluster config ...
	I0725 17:45:55.155344   23738 out.go:177] 
	I0725 17:45:55.157105   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:45:55.157226   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:45:55.158931   23738 out.go:177] * Starting "ha-174036-m02" control-plane node in "ha-174036" cluster
	I0725 17:45:55.160055   23738 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:45:55.160080   23738 cache.go:56] Caching tarball of preloaded images
	I0725 17:45:55.160161   23738 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 17:45:55.160175   23738 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 17:45:55.160244   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:45:55.160438   23738 start.go:360] acquireMachinesLock for ha-174036-m02: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 17:45:55.160485   23738 start.go:364] duration metric: took 27.238µs to acquireMachinesLock for "ha-174036-m02"
	I0725 17:45:55.160500   23738 start.go:93] Provisioning new machine with config: &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:45:55.160569   23738 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0725 17:45:55.162033   23738 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 17:45:55.162112   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:55.162135   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:55.178063   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0725 17:45:55.178487   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:55.178904   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:55.178922   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:55.179234   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:55.179434   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetMachineName
	I0725 17:45:55.179626   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:45:55.179861   23738 start.go:159] libmachine.API.Create for "ha-174036" (driver="kvm2")
	I0725 17:45:55.179884   23738 client.go:168] LocalClient.Create starting
	I0725 17:45:55.179923   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 17:45:55.179959   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:45:55.179976   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:45:55.180041   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 17:45:55.180063   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:45:55.180079   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:45:55.180106   23738 main.go:141] libmachine: Running pre-create checks...
	I0725 17:45:55.180118   23738 main.go:141] libmachine: (ha-174036-m02) Calling .PreCreateCheck
	I0725 17:45:55.180360   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetConfigRaw
	I0725 17:45:55.180759   23738 main.go:141] libmachine: Creating machine...
	I0725 17:45:55.180773   23738 main.go:141] libmachine: (ha-174036-m02) Calling .Create
	I0725 17:45:55.180930   23738 main.go:141] libmachine: (ha-174036-m02) Creating KVM machine...
	I0725 17:45:55.182197   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found existing default KVM network
	I0725 17:45:55.182309   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found existing private KVM network mk-ha-174036
	I0725 17:45:55.182439   23738 main.go:141] libmachine: (ha-174036-m02) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02 ...
	I0725 17:45:55.182465   23738 main.go:141] libmachine: (ha-174036-m02) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 17:45:55.182515   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:55.182426   24141 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:45:55.182612   23738 main.go:141] libmachine: (ha-174036-m02) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 17:45:55.426913   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:55.426797   24141 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa...
	I0725 17:45:55.616429   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:55.616299   24141 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/ha-174036-m02.rawdisk...
	I0725 17:45:55.616456   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Writing magic tar header
	I0725 17:45:55.616467   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Writing SSH key tar header
	I0725 17:45:55.616479   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:55.616441   24141 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02 ...
	I0725 17:45:55.616610   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02
	I0725 17:45:55.616641   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02 (perms=drwx------)
	I0725 17:45:55.616651   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 17:45:55.616666   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:45:55.616676   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 17:45:55.616687   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 17:45:55.616698   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins
	I0725 17:45:55.616709   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home
	I0725 17:45:55.616721   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Skipping /home - not owner
	I0725 17:45:55.616734   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 17:45:55.616747   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 17:45:55.616767   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 17:45:55.616784   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 17:45:55.616794   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 17:45:55.616802   23738 main.go:141] libmachine: (ha-174036-m02) Creating domain...
	I0725 17:45:55.617606   23738 main.go:141] libmachine: (ha-174036-m02) define libvirt domain using xml: 
	I0725 17:45:55.617630   23738 main.go:141] libmachine: (ha-174036-m02) <domain type='kvm'>
	I0725 17:45:55.617641   23738 main.go:141] libmachine: (ha-174036-m02)   <name>ha-174036-m02</name>
	I0725 17:45:55.617652   23738 main.go:141] libmachine: (ha-174036-m02)   <memory unit='MiB'>2200</memory>
	I0725 17:45:55.617661   23738 main.go:141] libmachine: (ha-174036-m02)   <vcpu>2</vcpu>
	I0725 17:45:55.617668   23738 main.go:141] libmachine: (ha-174036-m02)   <features>
	I0725 17:45:55.617678   23738 main.go:141] libmachine: (ha-174036-m02)     <acpi/>
	I0725 17:45:55.617685   23738 main.go:141] libmachine: (ha-174036-m02)     <apic/>
	I0725 17:45:55.617697   23738 main.go:141] libmachine: (ha-174036-m02)     <pae/>
	I0725 17:45:55.617705   23738 main.go:141] libmachine: (ha-174036-m02)     
	I0725 17:45:55.617714   23738 main.go:141] libmachine: (ha-174036-m02)   </features>
	I0725 17:45:55.617720   23738 main.go:141] libmachine: (ha-174036-m02)   <cpu mode='host-passthrough'>
	I0725 17:45:55.617739   23738 main.go:141] libmachine: (ha-174036-m02)   
	I0725 17:45:55.617750   23738 main.go:141] libmachine: (ha-174036-m02)   </cpu>
	I0725 17:45:55.617756   23738 main.go:141] libmachine: (ha-174036-m02)   <os>
	I0725 17:45:55.617764   23738 main.go:141] libmachine: (ha-174036-m02)     <type>hvm</type>
	I0725 17:45:55.617776   23738 main.go:141] libmachine: (ha-174036-m02)     <boot dev='cdrom'/>
	I0725 17:45:55.617782   23738 main.go:141] libmachine: (ha-174036-m02)     <boot dev='hd'/>
	I0725 17:45:55.617788   23738 main.go:141] libmachine: (ha-174036-m02)     <bootmenu enable='no'/>
	I0725 17:45:55.617795   23738 main.go:141] libmachine: (ha-174036-m02)   </os>
	I0725 17:45:55.617800   23738 main.go:141] libmachine: (ha-174036-m02)   <devices>
	I0725 17:45:55.617807   23738 main.go:141] libmachine: (ha-174036-m02)     <disk type='file' device='cdrom'>
	I0725 17:45:55.617816   23738 main.go:141] libmachine: (ha-174036-m02)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/boot2docker.iso'/>
	I0725 17:45:55.617826   23738 main.go:141] libmachine: (ha-174036-m02)       <target dev='hdc' bus='scsi'/>
	I0725 17:45:55.617831   23738 main.go:141] libmachine: (ha-174036-m02)       <readonly/>
	I0725 17:45:55.617838   23738 main.go:141] libmachine: (ha-174036-m02)     </disk>
	I0725 17:45:55.617844   23738 main.go:141] libmachine: (ha-174036-m02)     <disk type='file' device='disk'>
	I0725 17:45:55.617853   23738 main.go:141] libmachine: (ha-174036-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 17:45:55.617866   23738 main.go:141] libmachine: (ha-174036-m02)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/ha-174036-m02.rawdisk'/>
	I0725 17:45:55.617874   23738 main.go:141] libmachine: (ha-174036-m02)       <target dev='hda' bus='virtio'/>
	I0725 17:45:55.617880   23738 main.go:141] libmachine: (ha-174036-m02)     </disk>
	I0725 17:45:55.617887   23738 main.go:141] libmachine: (ha-174036-m02)     <interface type='network'>
	I0725 17:45:55.617904   23738 main.go:141] libmachine: (ha-174036-m02)       <source network='mk-ha-174036'/>
	I0725 17:45:55.617923   23738 main.go:141] libmachine: (ha-174036-m02)       <model type='virtio'/>
	I0725 17:45:55.617936   23738 main.go:141] libmachine: (ha-174036-m02)     </interface>
	I0725 17:45:55.617945   23738 main.go:141] libmachine: (ha-174036-m02)     <interface type='network'>
	I0725 17:45:55.617951   23738 main.go:141] libmachine: (ha-174036-m02)       <source network='default'/>
	I0725 17:45:55.617959   23738 main.go:141] libmachine: (ha-174036-m02)       <model type='virtio'/>
	I0725 17:45:55.617964   23738 main.go:141] libmachine: (ha-174036-m02)     </interface>
	I0725 17:45:55.617971   23738 main.go:141] libmachine: (ha-174036-m02)     <serial type='pty'>
	I0725 17:45:55.617978   23738 main.go:141] libmachine: (ha-174036-m02)       <target port='0'/>
	I0725 17:45:55.617987   23738 main.go:141] libmachine: (ha-174036-m02)     </serial>
	I0725 17:45:55.618006   23738 main.go:141] libmachine: (ha-174036-m02)     <console type='pty'>
	I0725 17:45:55.618022   23738 main.go:141] libmachine: (ha-174036-m02)       <target type='serial' port='0'/>
	I0725 17:45:55.618033   23738 main.go:141] libmachine: (ha-174036-m02)     </console>
	I0725 17:45:55.618040   23738 main.go:141] libmachine: (ha-174036-m02)     <rng model='virtio'>
	I0725 17:45:55.618053   23738 main.go:141] libmachine: (ha-174036-m02)       <backend model='random'>/dev/random</backend>
	I0725 17:45:55.618061   23738 main.go:141] libmachine: (ha-174036-m02)     </rng>
	I0725 17:45:55.618067   23738 main.go:141] libmachine: (ha-174036-m02)     
	I0725 17:45:55.618076   23738 main.go:141] libmachine: (ha-174036-m02)     
	I0725 17:45:55.618081   23738 main.go:141] libmachine: (ha-174036-m02)   </devices>
	I0725 17:45:55.618088   23738 main.go:141] libmachine: (ha-174036-m02) </domain>
	I0725 17:45:55.618107   23738 main.go:141] libmachine: (ha-174036-m02) 
	I0725 17:45:55.624823   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:4a:ce:b8 in network default
	I0725 17:45:55.625389   23738 main.go:141] libmachine: (ha-174036-m02) Ensuring networks are active...
	I0725 17:45:55.625409   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:55.626160   23738 main.go:141] libmachine: (ha-174036-m02) Ensuring network default is active
	I0725 17:45:55.626581   23738 main.go:141] libmachine: (ha-174036-m02) Ensuring network mk-ha-174036 is active
	I0725 17:45:55.626937   23738 main.go:141] libmachine: (ha-174036-m02) Getting domain xml...
	I0725 17:45:55.627612   23738 main.go:141] libmachine: (ha-174036-m02) Creating domain...
	I0725 17:45:56.833602   23738 main.go:141] libmachine: (ha-174036-m02) Waiting to get IP...
	I0725 17:45:56.834339   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:56.834770   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:56.834797   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:56.834744   24141 retry.go:31] will retry after 234.358388ms: waiting for machine to come up
	I0725 17:45:57.071228   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:57.071666   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:57.071728   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:57.071637   24141 retry.go:31] will retry after 238.148169ms: waiting for machine to come up
	I0725 17:45:57.311048   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:57.311519   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:57.311545   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:57.311472   24141 retry.go:31] will retry after 312.220932ms: waiting for machine to come up
	I0725 17:45:57.624808   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:57.625230   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:57.625256   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:57.625189   24141 retry.go:31] will retry after 519.906509ms: waiting for machine to come up
	I0725 17:45:58.146508   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:58.146952   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:58.146978   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:58.146918   24141 retry.go:31] will retry after 486.541786ms: waiting for machine to come up
	I0725 17:45:58.634623   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:58.635069   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:58.635101   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:58.635014   24141 retry.go:31] will retry after 628.549445ms: waiting for machine to come up
	I0725 17:45:59.265330   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:59.265799   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:59.265824   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:59.265762   24141 retry.go:31] will retry after 770.991951ms: waiting for machine to come up
	I0725 17:46:00.038570   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:00.038986   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:00.039023   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:00.038936   24141 retry.go:31] will retry after 901.347868ms: waiting for machine to come up
	I0725 17:46:00.941394   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:00.941889   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:00.941911   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:00.941846   24141 retry.go:31] will retry after 1.713993666s: waiting for machine to come up
	I0725 17:46:02.657596   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:02.657983   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:02.658001   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:02.657942   24141 retry.go:31] will retry after 1.578532576s: waiting for machine to come up
	I0725 17:46:04.238727   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:04.239149   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:04.239181   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:04.239088   24141 retry.go:31] will retry after 2.686856273s: waiting for machine to come up
	I0725 17:46:06.928339   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:06.928828   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:06.928853   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:06.928780   24141 retry.go:31] will retry after 3.150698622s: waiting for machine to come up
	I0725 17:46:10.082964   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:10.083347   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:10.083370   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:10.083303   24141 retry.go:31] will retry after 4.376886346s: waiting for machine to come up
	I0725 17:46:14.461253   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.461676   23738 main.go:141] libmachine: (ha-174036-m02) Found IP for machine: 192.168.39.197
	I0725 17:46:14.461708   23738 main.go:141] libmachine: (ha-174036-m02) Reserving static IP address...
	I0725 17:46:14.461723   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has current primary IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.462099   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find host DHCP lease matching {name: "ha-174036-m02", mac: "52:54:00:75:a1:05", ip: "192.168.39.197"} in network mk-ha-174036
	I0725 17:46:14.534623   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Getting to WaitForSSH function...
	I0725 17:46:14.534653   23738 main.go:141] libmachine: (ha-174036-m02) Reserved static IP address: 192.168.39.197
	I0725 17:46:14.534667   23738 main.go:141] libmachine: (ha-174036-m02) Waiting for SSH to be available...
	I0725 17:46:14.537445   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.537846   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:minikube Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:14.537886   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.537940   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Using SSH client type: external
	I0725 17:46:14.538017   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa (-rw-------)
	I0725 17:46:14.538053   23738 main.go:141] libmachine: (ha-174036-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 17:46:14.538071   23738 main.go:141] libmachine: (ha-174036-m02) DBG | About to run SSH command:
	I0725 17:46:14.538085   23738 main.go:141] libmachine: (ha-174036-m02) DBG | exit 0
	I0725 17:46:14.660284   23738 main.go:141] libmachine: (ha-174036-m02) DBG | SSH cmd err, output: <nil>: 
	I0725 17:46:14.660574   23738 main.go:141] libmachine: (ha-174036-m02) KVM machine creation complete!
	I0725 17:46:14.660853   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetConfigRaw
	I0725 17:46:14.661411   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:14.661599   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:14.661789   23738 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 17:46:14.661811   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 17:46:14.663133   23738 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 17:46:14.663147   23738 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 17:46:14.663153   23738 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 17:46:14.663159   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:14.665750   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.666199   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:14.666223   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.666369   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:14.666564   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.666722   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.666860   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:14.667015   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:14.667200   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:14.667211   23738 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 17:46:14.771419   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:46:14.771452   23738 main.go:141] libmachine: Detecting the provisioner...
	I0725 17:46:14.771464   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:14.774340   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.774722   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:14.774745   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.774908   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:14.775102   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.775329   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.775482   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:14.775653   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:14.775849   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:14.775859   23738 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 17:46:14.880994   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 17:46:14.881057   23738 main.go:141] libmachine: found compatible host: buildroot
	I0725 17:46:14.881064   23738 main.go:141] libmachine: Provisioning with buildroot...
	I0725 17:46:14.881071   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetMachineName
	I0725 17:46:14.881308   23738 buildroot.go:166] provisioning hostname "ha-174036-m02"
	I0725 17:46:14.881339   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetMachineName
	I0725 17:46:14.881508   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:14.884038   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.884377   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:14.884403   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.884527   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:14.884695   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.884883   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.885101   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:14.885297   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:14.885450   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:14.885462   23738 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174036-m02 && echo "ha-174036-m02" | sudo tee /etc/hostname
	I0725 17:46:15.012269   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174036-m02
	
	I0725 17:46:15.012289   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.015465   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.015835   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.015865   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.016043   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:15.016222   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.016427   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.016571   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:15.016789   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:15.016964   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:15.016983   23738 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174036-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174036-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174036-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:46:15.133761   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:46:15.133787   23738 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 17:46:15.133810   23738 buildroot.go:174] setting up certificates
	I0725 17:46:15.133822   23738 provision.go:84] configureAuth start
	I0725 17:46:15.133832   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetMachineName
	I0725 17:46:15.134145   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:46:15.136827   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.137173   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.137201   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.137333   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.139909   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.140213   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.140231   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.140417   23738 provision.go:143] copyHostCerts
	I0725 17:46:15.140453   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:46:15.140492   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 17:46:15.140506   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:46:15.140625   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 17:46:15.140723   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:46:15.140749   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 17:46:15.140760   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:46:15.140806   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 17:46:15.140870   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:46:15.140897   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 17:46:15.140906   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:46:15.140939   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 17:46:15.141008   23738 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.ha-174036-m02 san=[127.0.0.1 192.168.39.197 ha-174036-m02 localhost minikube]
	I0725 17:46:15.336606   23738 provision.go:177] copyRemoteCerts
	I0725 17:46:15.336663   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:46:15.336687   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.339533   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.339895   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.339920   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.340156   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:15.340367   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.340574   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:15.340723   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	I0725 17:46:15.422722   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 17:46:15.422793   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 17:46:15.445735   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 17:46:15.445806   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 17:46:15.467773   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 17:46:15.467840   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 17:46:15.490131   23738 provision.go:87] duration metric: took 356.296388ms to configureAuth
	I0725 17:46:15.490157   23738 buildroot.go:189] setting minikube options for container-runtime
	I0725 17:46:15.490334   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:46:15.490444   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.493199   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.493589   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.493609   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.493798   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:15.494074   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.494309   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.494432   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:15.494584   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:15.494737   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:15.494750   23738 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 17:46:15.757132   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 17:46:15.757160   23738 main.go:141] libmachine: Checking connection to Docker...
	I0725 17:46:15.757170   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetURL
	I0725 17:46:15.758549   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Using libvirt version 6000000
	I0725 17:46:15.760634   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.761094   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.761124   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.761298   23738 main.go:141] libmachine: Docker is up and running!
	I0725 17:46:15.761330   23738 main.go:141] libmachine: Reticulating splines...
	I0725 17:46:15.761338   23738 client.go:171] duration metric: took 20.581445856s to LocalClient.Create
	I0725 17:46:15.761362   23738 start.go:167] duration metric: took 20.581502574s to libmachine.API.Create "ha-174036"
	I0725 17:46:15.761373   23738 start.go:293] postStartSetup for "ha-174036-m02" (driver="kvm2")
	I0725 17:46:15.761389   23738 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:46:15.761408   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:15.761654   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:46:15.761677   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.763657   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.764015   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.764043   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.764202   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:15.764422   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.764624   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:15.764793   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	I0725 17:46:15.850065   23738 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:46:15.853948   23738 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 17:46:15.853971   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 17:46:15.854038   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 17:46:15.854132   23738 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 17:46:15.854143   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 17:46:15.854223   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:46:15.862786   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:46:15.884269   23738 start.go:296] duration metric: took 122.879764ms for postStartSetup
	I0725 17:46:15.884355   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetConfigRaw
	I0725 17:46:15.884906   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:46:15.887535   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.887914   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.887941   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.888133   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:46:15.888362   23738 start.go:128] duration metric: took 20.727779703s to createHost
	I0725 17:46:15.888388   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.890674   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.891037   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.891059   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.891178   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:15.891371   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.891542   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.891677   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:15.891827   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:15.891974   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:15.891983   23738 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 17:46:15.996783   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929575.968778073
	
	I0725 17:46:15.996810   23738 fix.go:216] guest clock: 1721929575.968778073
	I0725 17:46:15.996820   23738 fix.go:229] Guest: 2024-07-25 17:46:15.968778073 +0000 UTC Remote: 2024-07-25 17:46:15.888376977 +0000 UTC m=+75.572426032 (delta=80.401096ms)
	I0725 17:46:15.996844   23738 fix.go:200] guest clock delta is within tolerance: 80.401096ms
	I0725 17:46:15.996852   23738 start.go:83] releasing machines lock for "ha-174036-m02", held for 20.836357411s
	I0725 17:46:15.996877   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:15.997122   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:46:16.000081   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.000525   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:16.000544   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.003289   23738 out.go:177] * Found network options:
	I0725 17:46:16.004808   23738 out.go:177]   - NO_PROXY=192.168.39.165
	W0725 17:46:16.006215   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 17:46:16.006249   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:16.006788   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:16.006983   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:16.007083   23738 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 17:46:16.007126   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	W0725 17:46:16.007151   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 17:46:16.007228   23738 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 17:46:16.007261   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:16.009867   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.009943   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.010280   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:16.010308   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.010344   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:16.010365   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.010452   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:16.010603   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:16.010661   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:16.010843   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:16.010862   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:16.011027   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:16.011021   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	I0725 17:46:16.011170   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	I0725 17:46:16.244742   23738 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 17:46:16.251126   23738 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 17:46:16.251186   23738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 17:46:16.266040   23738 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 17:46:16.266061   23738 start.go:495] detecting cgroup driver to use...
	I0725 17:46:16.266121   23738 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 17:46:16.280925   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 17:46:16.295199   23738 docker.go:217] disabling cri-docker service (if available) ...
	I0725 17:46:16.295262   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 17:46:16.308431   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 17:46:16.322356   23738 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 17:46:16.432768   23738 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 17:46:16.569678   23738 docker.go:233] disabling docker service ...
	I0725 17:46:16.569759   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 17:46:16.593695   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 17:46:16.605656   23738 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 17:46:16.749283   23738 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 17:46:16.867731   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 17:46:16.881317   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:46:16.897749   23738 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 17:46:16.897798   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.906943   23738 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 17:46:16.906988   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.916138   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.925103   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.934217   23738 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 17:46:16.943712   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.952891   23738 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.968195   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
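
The sed/grep edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports by default. A rough way to confirm the end state on the node (a sketch; the exact layout of the drop-in shipped on the minikube ISO may differ):

    sudo grep -A2 -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
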
	I0725 17:46:16.977374   23738 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 17:46:16.985563   23738 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 17:46:16.985623   23738 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 17:46:16.997634   23738 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
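
Before restarting CRI-O, the run loads br_netfilter and enables IPv4 forwarding so bridged pod traffic is visible to iptables and can be routed between nodes. The same prerequisites can be checked by hand (sketch):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # should report 1 once the module is loaded
    sysctl net.ipv4.ip_forward                  # set to 1 by the echo above
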
	I0725 17:46:17.006156   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:46:17.119293   23738 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 17:46:17.251508   23738 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 17:46:17.251585   23738 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 17:46:17.256424   23738 start.go:563] Will wait 60s for crictl version
	I0725 17:46:17.256491   23738 ssh_runner.go:195] Run: which crictl
	I0725 17:46:17.259983   23738 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 17:46:17.297168   23738 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 17:46:17.297244   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:46:17.324368   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:46:17.352839   23738 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 17:46:17.354140   23738 out.go:177]   - env NO_PROXY=192.168.39.165
	I0725 17:46:17.355459   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:46:17.358126   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:17.358444   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:17.358472   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:17.358653   23738 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 17:46:17.362321   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:46:17.373370   23738 mustload.go:65] Loading cluster: ha-174036
	I0725 17:46:17.373563   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:46:17.373796   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:46:17.373822   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:46:17.388382   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0725 17:46:17.388767   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:46:17.389179   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:46:17.389197   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:46:17.389473   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:46:17.389711   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:46:17.391333   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:46:17.391662   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:46:17.391686   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:46:17.405579   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45231
	I0725 17:46:17.405971   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:46:17.406393   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:46:17.406415   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:46:17.406700   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:46:17.406878   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:46:17.407016   23738 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036 for IP: 192.168.39.197
	I0725 17:46:17.407090   23738 certs.go:194] generating shared ca certs ...
	I0725 17:46:17.407127   23738 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:46:17.407260   23738 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 17:46:17.407323   23738 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 17:46:17.407334   23738 certs.go:256] generating profile certs ...
	I0725 17:46:17.407402   23738 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key
	I0725 17:46:17.407429   23738 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.10b823cc
	I0725 17:46:17.407444   23738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.10b823cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.197 192.168.39.254]
	I0725 17:46:17.543040   23738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.10b823cc ...
	I0725 17:46:17.543066   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.10b823cc: {Name:mkeb95191f3396f0d9f7d26e0743c170c184b50a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:46:17.543224   23738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.10b823cc ...
	I0725 17:46:17.543238   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.10b823cc: {Name:mk37c7f5246913dc22856aece47c3693a6ee3747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:46:17.543312   23738 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.10b823cc -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt
	I0725 17:46:17.543432   23738 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.10b823cc -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key
	I0725 17:46:17.543550   23738 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key
	I0725 17:46:17.543564   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 17:46:17.543576   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 17:46:17.543588   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 17:46:17.543601   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 17:46:17.543612   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 17:46:17.543625   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 17:46:17.543637   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 17:46:17.543649   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
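
The new apiserver serving certificate generated above carries every address a client might use to reach this control plane: the in-cluster service IP, localhost, both node IPs and the HA VIP. Once the files are copied to the node, the SANs can be confirmed with (sketch, using the target path listed above):

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expect IP Address entries for 10.96.0.1, 127.0.0.1, 10.0.0.1,
    # 192.168.39.165, 192.168.39.197 and the VIP 192.168.39.254
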
	I0725 17:46:17.543690   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 17:46:17.543717   23738 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 17:46:17.543726   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:46:17.543754   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 17:46:17.543774   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:46:17.543794   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 17:46:17.543827   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:46:17.543854   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 17:46:17.543867   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 17:46:17.543879   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:46:17.543908   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:46:17.546947   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:46:17.547426   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:46:17.547451   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:46:17.547658   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:46:17.547838   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:46:17.547993   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:46:17.548123   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:46:17.620690   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0725 17:46:17.625937   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0725 17:46:17.638090   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0725 17:46:17.642037   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0725 17:46:17.653544   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0725 17:46:17.658197   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0725 17:46:17.669081   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0725 17:46:17.673329   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0725 17:46:17.683670   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0725 17:46:17.687844   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0725 17:46:17.698859   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0725 17:46:17.702629   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0725 17:46:17.712623   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:46:17.738164   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 17:46:17.762656   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:46:17.787275   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 17:46:17.811370   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0725 17:46:17.835472   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 17:46:17.859639   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:46:17.883559   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:46:17.907908   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 17:46:17.932502   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 17:46:17.956867   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:46:17.981160   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0725 17:46:17.997763   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0725 17:46:18.014729   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0725 17:46:18.031547   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0725 17:46:18.047855   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0725 17:46:18.063794   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0725 17:46:18.079112   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0725 17:46:18.094576   23738 ssh_runner.go:195] Run: openssl version
	I0725 17:46:18.099711   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:46:18.108985   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:46:18.113038   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:46:18.113079   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:46:18.118165   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:46:18.127360   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 17:46:18.136748   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 17:46:18.140558   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 17:46:18.140602   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 17:46:18.145565   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 17:46:18.154715   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 17:46:18.164195   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 17:46:18.168088   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 17:46:18.168128   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 17:46:18.173350   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
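
Each ln -fs above creates the hash-named symlink that OpenSSL's CA lookup expects: the file name is the certificate's subject hash with a .0 suffix. The same link for the minikube CA could be produced by hand as (sketch):

    HASH=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 in this run
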
	I0725 17:46:18.182613   23738 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 17:46:18.186312   23738 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 17:46:18.186363   23738 kubeadm.go:934] updating node {m02 192.168.39.197 8443 v1.30.3 crio true true} ...
	I0725 17:46:18.186449   23738 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174036-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
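
The kubelet drop-in above (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down) pins kubelet on the new machine to the hostname override ha-174036-m02 and node IP 192.168.39.197, while keeping the bootstrap kubeconfig for the initial TLS bootstrap. Once it is in place, the rendered unit can be inspected with (sketch):

    systemctl cat kubelet
    systemctl show kubelet -p DropInPaths
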
	I0725 17:46:18.186473   23738 kube-vip.go:115] generating kube-vip config ...
	I0725 17:46:18.186501   23738 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 17:46:18.201233   23738 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 17:46:18.201313   23738 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
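
This static pod runs kube-vip on each control-plane node: it advertises the HA VIP 192.168.39.254 on port 8443 via ARP and uses leader election on the plndr-cp-lock lease, so only the current leader answers for the VIP. With the cluster up, where the VIP currently lives can be checked with (sketch):

    # the VIP shows up as an extra address on eth0 of the current kube-vip leader
    ip -4 addr show eth0 | grep 192.168.39.254
    # the leader election lease lives in kube-system
    kubectl -n kube-system get lease plndr-cp-lock
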
	I0725 17:46:18.201359   23738 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 17:46:18.209774   23738 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0725 17:46:18.209876   23738 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0725 17:46:18.218405   23738 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0725 17:46:18.218430   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0725 17:46:18.218435   23738 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0725 17:46:18.218451   23738 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0725 17:46:18.218487   23738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0725 17:46:18.222370   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0725 17:46:18.222396   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0725 17:46:19.022050   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0725 17:46:19.022121   23738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0725 17:46:19.026967   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0725 17:46:19.026999   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0725 17:46:19.297222   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:46:19.310991   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0725 17:46:19.311077   23738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0725 17:46:19.314911   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0725 17:46:19.314948   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
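
Each Kubernetes binary is fetched with a checksum= URL, i.e. the .sha256 file published next to it on dl.k8s.io is downloaded and verified before the binary is copied to the node. The manual equivalent for one of them (sketch, same URLs as above):

    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
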
	I0725 17:46:19.677180   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0725 17:46:19.685985   23738 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0725 17:46:19.702337   23738 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:46:19.724287   23738 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0725 17:46:19.739482   23738 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0725 17:46:19.743069   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:46:19.754007   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:46:19.859835   23738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:46:19.874970   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:46:19.875451   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:46:19.875502   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:46:19.890713   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33725
	I0725 17:46:19.891155   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:46:19.891608   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:46:19.891637   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:46:19.891975   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:46:19.892175   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:46:19.892362   23738 start.go:317] joinCluster: &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:46:19.892452   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0725 17:46:19.892468   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:46:19.895393   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:46:19.895800   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:46:19.895829   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:46:19.895944   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:46:19.896093   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:46:19.896227   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:46:19.896407   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:46:20.043032   23738 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:46:20.043070   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token egdkav.g7g6hnq2ok6nvfh4 --discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174036-m02 --control-plane --apiserver-advertise-address=192.168.39.197 --apiserver-bind-port=8443"
	I0725 17:46:43.362278   23738 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token egdkav.g7g6hnq2ok6nvfh4 --discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174036-m02 --control-plane --apiserver-advertise-address=192.168.39.197 --apiserver-bind-port=8443": (23.319185275s)
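
The join command run here was printed on the primary with kubeadm token create --print-join-command --ttl=0 (see above); the --discovery-token-ca-cert-hash pins the cluster CA so the joining node only trusts the intended control plane. Should the hash ever need to be recomputed by hand, the usual recipe against the CA copied to this node is (sketch):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
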
	I0725 17:46:43.362316   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0725 17:46:43.952764   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174036-m02 minikube.k8s.io/updated_at=2024_07_25T17_46_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=ha-174036 minikube.k8s.io/primary=false
	I0725 17:46:44.064206   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174036-m02 node-role.kubernetes.io/control-plane:NoSchedule-
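
After the join, the new node gets the minikube metadata labels and its control-plane NoSchedule taint is removed, since minikube HA nodes double as workers. A quick check against the cluster (sketch):

    kubectl get node ha-174036-m02 --show-labels
    kubectl describe node ha-174036-m02 | grep -i taints   # should no longer list node-role.kubernetes.io/control-plane:NoSchedule
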
	I0725 17:46:44.172745   23738 start.go:319] duration metric: took 24.280379011s to joinCluster
	I0725 17:46:44.172813   23738 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:46:44.173079   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:46:44.174256   23738 out.go:177] * Verifying Kubernetes components...
	I0725 17:46:44.175431   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:46:44.432392   23738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:46:44.486204   23738 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:46:44.486472   23738 kapi.go:59] client config for ha-174036: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0725 17:46:44.486531   23738 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.165:8443
	I0725 17:46:44.486749   23738 node_ready.go:35] waiting up to 6m0s for node "ha-174036-m02" to be "Ready" ...
	I0725 17:46:44.486862   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:44.486872   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:44.486883   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:44.486890   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:44.498977   23738 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0725 17:46:44.987457   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:44.987477   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:44.987485   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:44.987488   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:44.991792   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:46:45.487749   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:45.487767   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:45.487775   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:45.487779   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:45.506677   23738 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0725 17:46:45.987641   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:45.987660   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:45.987667   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:45.987671   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:45.990820   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:46.487804   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:46.487830   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:46.487841   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:46.487847   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:46.490960   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:46.491598   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:46.986998   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:46.987020   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:46.987031   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:46.987037   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:46.992238   23738 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 17:46:47.487433   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:47.487456   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:47.487464   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:47.487469   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:47.490809   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:47.986945   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:47.986966   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:47.986978   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:47.986985   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:47.990837   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:48.487058   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:48.487079   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:48.487088   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:48.487091   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:48.490859   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:48.491656   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:48.987057   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:48.987078   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:48.987086   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:48.987090   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:48.990291   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:49.487142   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:49.487161   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:49.487169   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:49.487177   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:49.490373   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:49.987828   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:49.987849   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:49.987857   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:49.987861   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:50.077985   23738 round_trippers.go:574] Response Status: 200 OK in 90 milliseconds
	I0725 17:46:50.487105   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:50.487137   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:50.487144   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:50.487148   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:50.490951   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:50.987897   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:50.987917   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:50.987925   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:50.987928   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:50.991037   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:50.991685   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:51.486930   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:51.486949   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:51.486956   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:51.486961   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:51.490311   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:51.987313   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:51.987344   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:51.987355   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:51.987361   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:51.990610   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:52.487171   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:52.487197   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:52.487216   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:52.487222   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:52.490341   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:52.987331   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:52.987353   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:52.987361   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:52.987366   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:52.990592   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:53.487689   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:53.487711   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:53.487719   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:53.487723   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:53.490971   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:53.491391   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:53.987827   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:53.987848   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:53.987856   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:53.987861   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:53.990984   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:54.487464   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:54.487486   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:54.487495   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:54.487499   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:54.490978   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:54.986989   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:54.987013   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:54.987021   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:54.987024   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:54.990625   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:55.487831   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:55.487858   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:55.487869   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:55.487876   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:55.491327   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:55.491818   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:55.987146   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:55.987166   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:55.987175   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:55.987179   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:55.990574   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:56.487954   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:56.487976   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:56.487984   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:56.487989   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:56.491289   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:56.987923   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:56.987945   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:56.987955   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:56.987960   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:56.991204   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:57.487631   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:57.487651   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:57.487659   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:57.487666   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:57.490533   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:46:57.987588   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:57.987612   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:57.987620   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:57.987624   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:57.990687   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:57.991285   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:58.487618   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:58.487639   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:58.487647   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:58.487651   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:58.490842   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:58.987837   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:58.987856   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:58.987864   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:58.987870   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:58.990761   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:46:59.487374   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:59.487394   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:59.487403   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:59.487406   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:59.492401   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:46:59.987374   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:59.987393   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:59.987406   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:59.987410   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:59.991439   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:46:59.992125   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:47:00.487741   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:00.487762   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:00.487770   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:00.487774   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:00.491187   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:00.987289   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:00.987315   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:00.987323   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:00.987326   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:00.990709   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:01.487594   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:01.487619   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:01.487626   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:01.487630   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:01.491124   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:01.987142   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:01.987168   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:01.987178   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:01.987183   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:01.990992   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:01.991771   23738 node_ready.go:49] node "ha-174036-m02" has status "Ready":"True"
	I0725 17:47:01.991787   23738 node_ready.go:38] duration metric: took 17.505006515s for node "ha-174036-m02" to be "Ready" ...
	I0725 17:47:01.991795   23738 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:47:01.991849   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:47:01.991857   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:01.991864   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:01.991868   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:01.997924   23738 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0725 17:47:02.004622   23738 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.004712   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-flblg
	I0725 17:47:02.004723   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.004733   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.004740   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.007973   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.008557   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:02.008570   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.008577   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.008580   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.011288   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.011925   23738 pod_ready.go:92] pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.011942   23738 pod_ready.go:81] duration metric: took 7.296597ms for pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.011950   23738 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.011993   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vtr9p
	I0725 17:47:02.012000   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.012006   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.012011   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.014637   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.015210   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:02.015224   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.015232   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.015237   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.017977   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.018537   23738 pod_ready.go:92] pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.018552   23738 pod_ready.go:81] duration metric: took 6.596031ms for pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.018563   23738 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.018615   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036
	I0725 17:47:02.018627   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.018636   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.018642   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.021772   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.022544   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:02.022558   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.022570   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.022576   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.025266   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.026133   23738 pod_ready.go:92] pod "etcd-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.026146   23738 pod_ready.go:81] duration metric: took 7.576965ms for pod "etcd-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.026154   23738 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.026193   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036-m02
	I0725 17:47:02.026200   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.026206   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.026209   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.028923   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.029717   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:02.029731   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.029742   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.029748   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.032160   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.032657   23738 pod_ready.go:92] pod "etcd-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.032673   23738 pod_ready.go:81] duration metric: took 6.513801ms for pod "etcd-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.032693   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.188058   23738 request.go:629] Waited for 155.306844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036
	I0725 17:47:02.188125   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036
	I0725 17:47:02.188131   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.188139   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.188145   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.191508   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.388043   23738 request.go:629] Waited for 194.375732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:02.388137   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:02.388145   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.388157   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.388168   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.391817   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.392515   23738 pod_ready.go:92] pod "kube-apiserver-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.392550   23738 pod_ready.go:81] duration metric: took 359.843486ms for pod "kube-apiserver-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.392569   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.587205   23738 request.go:629] Waited for 194.495232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m02
	I0725 17:47:02.587262   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m02
	I0725 17:47:02.587267   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.587276   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.587281   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.590717   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.787801   23738 request.go:629] Waited for 196.393939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:02.787858   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:02.787863   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.787871   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.787877   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.791014   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.791717   23738 pod_ready.go:92] pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.791736   23738 pod_ready.go:81] duration metric: took 399.154295ms for pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.791748   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.987879   23738 request.go:629] Waited for 196.064166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036
	I0725 17:47:02.987931   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036
	I0725 17:47:02.987936   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.987943   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.987948   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.991504   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:03.187658   23738 request.go:629] Waited for 195.368899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:03.187737   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:03.187746   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:03.187758   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:03.187767   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:03.190876   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:03.191453   23738 pod_ready.go:92] pod "kube-controller-manager-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:03.191473   23738 pod_ready.go:81] duration metric: took 399.71658ms for pod "kube-controller-manager-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:03.191487   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:03.387434   23738 request.go:629] Waited for 195.878721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m02
	I0725 17:47:03.387500   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m02
	I0725 17:47:03.387505   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:03.387513   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:03.387518   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:03.390724   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:03.587718   23738 request.go:629] Waited for 196.356735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:03.587785   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:03.587790   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:03.587798   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:03.587801   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:03.590730   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:03.591204   23738 pod_ready.go:92] pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:03.591224   23738 pod_ready.go:81] duration metric: took 399.728826ms for pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:03.591241   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6jdn" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:03.787199   23738 request.go:629] Waited for 195.760729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6jdn
	I0725 17:47:03.787258   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6jdn
	I0725 17:47:03.787265   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:03.787276   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:03.787284   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:03.790752   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:03.987529   23738 request.go:629] Waited for 196.300522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:03.987598   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:03.987604   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:03.987612   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:03.987616   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:03.990728   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:03.991455   23738 pod_ready.go:92] pod "kube-proxy-s6jdn" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:03.991476   23738 pod_ready.go:81] duration metric: took 400.22747ms for pod "kube-proxy-s6jdn" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:03.991488   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xwvdm" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:04.187504   23738 request.go:629] Waited for 195.922258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwvdm
	I0725 17:47:04.187573   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwvdm
	I0725 17:47:04.187581   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:04.187593   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:04.187603   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:04.190592   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:04.387152   23738 request.go:629] Waited for 195.96497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:04.387227   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:04.387233   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:04.387241   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:04.387246   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:04.390491   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:04.391017   23738 pod_ready.go:92] pod "kube-proxy-xwvdm" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:04.391033   23738 pod_ready.go:81] duration metric: took 399.537258ms for pod "kube-proxy-xwvdm" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:04.391045   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:04.587153   23738 request.go:629] Waited for 196.034043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036
	I0725 17:47:04.587216   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036
	I0725 17:47:04.587222   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:04.587230   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:04.587234   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:04.590405   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:04.787653   23738 request.go:629] Waited for 196.383457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:04.787704   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:04.787709   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:04.787717   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:04.787721   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:04.790933   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:04.791508   23738 pod_ready.go:92] pod "kube-scheduler-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:04.791530   23738 pod_ready.go:81] duration metric: took 400.476886ms for pod "kube-scheduler-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:04.791551   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:04.988176   23738 request.go:629] Waited for 196.552995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m02
	I0725 17:47:04.988251   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m02
	I0725 17:47:04.988258   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:04.988265   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:04.988270   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:04.991506   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:05.187255   23738 request.go:629] Waited for 195.282705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:05.187325   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:05.187330   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.187337   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.187342   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.191136   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:05.192274   23738 pod_ready.go:92] pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:05.192298   23738 pod_ready.go:81] duration metric: took 400.736873ms for pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:05.192309   23738 pod_ready.go:38] duration metric: took 3.200502465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:47:05.192352   23738 api_server.go:52] waiting for apiserver process to appear ...
	I0725 17:47:05.192410   23738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:47:05.207600   23738 api_server.go:72] duration metric: took 21.034747687s to wait for apiserver process to appear ...
	I0725 17:47:05.207629   23738 api_server.go:88] waiting for apiserver healthz status ...
	I0725 17:47:05.207654   23738 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0725 17:47:05.216095   23738 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0725 17:47:05.216153   23738 round_trippers.go:463] GET https://192.168.39.165:8443/version
	I0725 17:47:05.216160   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.216168   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.216171   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.217820   23738 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 17:47:05.217918   23738 api_server.go:141] control plane version: v1.30.3
	I0725 17:47:05.217936   23738 api_server.go:131] duration metric: took 10.299137ms to wait for apiserver health ...
	I0725 17:47:05.217946   23738 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:47:05.387375   23738 request.go:629] Waited for 169.360683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:47:05.387456   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:47:05.387462   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.387472   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.387480   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.392825   23738 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 17:47:05.397101   23738 system_pods.go:59] 17 kube-system pods found
	I0725 17:47:05.397124   23738 system_pods.go:61] "coredns-7db6d8ff4d-flblg" [94857bc1-d7ba-466b-91d7-e2d5041159f2] Running
	I0725 17:47:05.397128   23738 system_pods.go:61] "coredns-7db6d8ff4d-vtr9p" [fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a] Running
	I0725 17:47:05.397133   23738 system_pods.go:61] "etcd-ha-174036" [4e6a0109-6d8e-406e-9ca5-b7190bf72eab] Running
	I0725 17:47:05.397136   23738 system_pods.go:61] "etcd-ha-174036-m02" [9d58a70b-3891-441c-8eb8-214121847c63] Running
	I0725 17:47:05.397139   23738 system_pods.go:61] "kindnet-2c2n8" [c8ed79cb-52d7-4dfa-a3a0-02329169d86c] Running
	I0725 17:47:05.397142   23738 system_pods.go:61] "kindnet-k4d8x" [3a912cbb-8702-492d-8316-258aedbd053d] Running
	I0725 17:47:05.397145   23738 system_pods.go:61] "kube-apiserver-ha-174036" [efbc55c3-c762-4d38-9602-02454c1ce8f4] Running
	I0725 17:47:05.397147   23738 system_pods.go:61] "kube-apiserver-ha-174036-m02" [2c704058-60b1-43fb-bde8-a5f7d8a9bc4f] Running
	I0725 17:47:05.397150   23738 system_pods.go:61] "kube-controller-manager-ha-174036" [bfdd16fe-f72f-4f8f-b175-b78d7dec78bb] Running
	I0725 17:47:05.397153   23738 system_pods.go:61] "kube-controller-manager-ha-174036-m02" [b181bc10-9572-43f4-9242-6e0676abdc64] Running
	I0725 17:47:05.397155   23738 system_pods.go:61] "kube-proxy-s6jdn" [f13b463b-f7f9-4b49-8e29-209cb153a6e6] Running
	I0725 17:47:05.397158   23738 system_pods.go:61] "kube-proxy-xwvdm" [aa62fb5e-6304-40b4-aa20-190c9ee56057] Running
	I0725 17:47:05.397160   23738 system_pods.go:61] "kube-scheduler-ha-174036" [4174968d-5004-47e6-b8fa-8c9ab4720f09] Running
	I0725 17:47:05.397163   23738 system_pods.go:61] "kube-scheduler-ha-174036-m02" [81a367fa-3418-45a0-85ca-f20549a43a2e] Running
	I0725 17:47:05.397166   23738 system_pods.go:61] "kube-vip-ha-174036" [2ce4bfe5-5441-4a28-889e-7743367f32b2] Running
	I0725 17:47:05.397168   23738 system_pods.go:61] "kube-vip-ha-174036-m02" [5a70af21-4ee6-4270-8e25-3b81618c629a] Running
	I0725 17:47:05.397171   23738 system_pods.go:61] "storage-provisioner" [c9354422-69ff-4676-80d1-4940badf9b4e] Running
	I0725 17:47:05.397176   23738 system_pods.go:74] duration metric: took 179.224406ms to wait for pod list to return data ...
	I0725 17:47:05.397190   23738 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:47:05.587416   23738 request.go:629] Waited for 190.161381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0725 17:47:05.587517   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0725 17:47:05.587525   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.587533   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.587540   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.590849   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:05.591135   23738 default_sa.go:45] found service account: "default"
	I0725 17:47:05.591157   23738 default_sa.go:55] duration metric: took 193.957914ms for default service account to be created ...
	I0725 17:47:05.591167   23738 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 17:47:05.787604   23738 request.go:629] Waited for 196.37118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:47:05.787675   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:47:05.787683   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.787692   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.787696   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.793242   23738 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 17:47:05.798193   23738 system_pods.go:86] 17 kube-system pods found
	I0725 17:47:05.798219   23738 system_pods.go:89] "coredns-7db6d8ff4d-flblg" [94857bc1-d7ba-466b-91d7-e2d5041159f2] Running
	I0725 17:47:05.798225   23738 system_pods.go:89] "coredns-7db6d8ff4d-vtr9p" [fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a] Running
	I0725 17:47:05.798230   23738 system_pods.go:89] "etcd-ha-174036" [4e6a0109-6d8e-406e-9ca5-b7190bf72eab] Running
	I0725 17:47:05.798234   23738 system_pods.go:89] "etcd-ha-174036-m02" [9d58a70b-3891-441c-8eb8-214121847c63] Running
	I0725 17:47:05.798238   23738 system_pods.go:89] "kindnet-2c2n8" [c8ed79cb-52d7-4dfa-a3a0-02329169d86c] Running
	I0725 17:47:05.798242   23738 system_pods.go:89] "kindnet-k4d8x" [3a912cbb-8702-492d-8316-258aedbd053d] Running
	I0725 17:47:05.798246   23738 system_pods.go:89] "kube-apiserver-ha-174036" [efbc55c3-c762-4d38-9602-02454c1ce8f4] Running
	I0725 17:47:05.798250   23738 system_pods.go:89] "kube-apiserver-ha-174036-m02" [2c704058-60b1-43fb-bde8-a5f7d8a9bc4f] Running
	I0725 17:47:05.798255   23738 system_pods.go:89] "kube-controller-manager-ha-174036" [bfdd16fe-f72f-4f8f-b175-b78d7dec78bb] Running
	I0725 17:47:05.798263   23738 system_pods.go:89] "kube-controller-manager-ha-174036-m02" [b181bc10-9572-43f4-9242-6e0676abdc64] Running
	I0725 17:47:05.798266   23738 system_pods.go:89] "kube-proxy-s6jdn" [f13b463b-f7f9-4b49-8e29-209cb153a6e6] Running
	I0725 17:47:05.798270   23738 system_pods.go:89] "kube-proxy-xwvdm" [aa62fb5e-6304-40b4-aa20-190c9ee56057] Running
	I0725 17:47:05.798275   23738 system_pods.go:89] "kube-scheduler-ha-174036" [4174968d-5004-47e6-b8fa-8c9ab4720f09] Running
	I0725 17:47:05.798279   23738 system_pods.go:89] "kube-scheduler-ha-174036-m02" [81a367fa-3418-45a0-85ca-f20549a43a2e] Running
	I0725 17:47:05.798285   23738 system_pods.go:89] "kube-vip-ha-174036" [2ce4bfe5-5441-4a28-889e-7743367f32b2] Running
	I0725 17:47:05.798288   23738 system_pods.go:89] "kube-vip-ha-174036-m02" [5a70af21-4ee6-4270-8e25-3b81618c629a] Running
	I0725 17:47:05.798291   23738 system_pods.go:89] "storage-provisioner" [c9354422-69ff-4676-80d1-4940badf9b4e] Running
	I0725 17:47:05.798299   23738 system_pods.go:126] duration metric: took 207.125612ms to wait for k8s-apps to be running ...
	I0725 17:47:05.798307   23738 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 17:47:05.798359   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:47:05.812296   23738 system_svc.go:56] duration metric: took 13.974348ms WaitForService to wait for kubelet
	I0725 17:47:05.812345   23738 kubeadm.go:582] duration metric: took 21.63949505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:47:05.812372   23738 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:47:05.987744   23738 request.go:629] Waited for 175.278659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I0725 17:47:05.987809   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I0725 17:47:05.987816   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.987832   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.987842   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.991239   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:05.992165   23738 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:47:05.992191   23738 node_conditions.go:123] node cpu capacity is 2
	I0725 17:47:05.992206   23738 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:47:05.992212   23738 node_conditions.go:123] node cpu capacity is 2
	I0725 17:47:05.992220   23738 node_conditions.go:105] duration metric: took 179.836812ms to run NodePressure ...
	I0725 17:47:05.992235   23738 start.go:241] waiting for startup goroutines ...
	I0725 17:47:05.992270   23738 start.go:255] writing updated cluster config ...
	I0725 17:47:05.994244   23738 out.go:177] 
	I0725 17:47:05.995505   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:47:05.995594   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:47:05.998012   23738 out.go:177] * Starting "ha-174036-m03" control-plane node in "ha-174036" cluster
	I0725 17:47:05.999095   23738 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:47:05.999118   23738 cache.go:56] Caching tarball of preloaded images
	I0725 17:47:05.999208   23738 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 17:47:05.999220   23738 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 17:47:05.999312   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:47:05.999474   23738 start.go:360] acquireMachinesLock for ha-174036-m03: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 17:47:05.999519   23738 start.go:364] duration metric: took 24.854µs to acquireMachinesLock for "ha-174036-m03"
	I0725 17:47:05.999541   23738 start.go:93] Provisioning new machine with config: &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:47:05.999680   23738 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0725 17:47:06.001046   23738 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 17:47:06.001134   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:47:06.001175   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:47:06.016185   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38231
	I0725 17:47:06.016629   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:47:06.017035   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:47:06.017056   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:47:06.017419   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:47:06.017634   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetMachineName
	I0725 17:47:06.017758   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:06.017903   23738 start.go:159] libmachine.API.Create for "ha-174036" (driver="kvm2")
	I0725 17:47:06.017941   23738 client.go:168] LocalClient.Create starting
	I0725 17:47:06.017980   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 17:47:06.018046   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:47:06.018065   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:47:06.018115   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 17:47:06.018139   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:47:06.018150   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:47:06.018168   23738 main.go:141] libmachine: Running pre-create checks...
	I0725 17:47:06.018176   23738 main.go:141] libmachine: (ha-174036-m03) Calling .PreCreateCheck
	I0725 17:47:06.018375   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetConfigRaw
	I0725 17:47:06.018882   23738 main.go:141] libmachine: Creating machine...
	I0725 17:47:06.018897   23738 main.go:141] libmachine: (ha-174036-m03) Calling .Create
	I0725 17:47:06.019021   23738 main.go:141] libmachine: (ha-174036-m03) Creating KVM machine...
	I0725 17:47:06.020239   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found existing default KVM network
	I0725 17:47:06.020312   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found existing private KVM network mk-ha-174036
	I0725 17:47:06.020486   23738 main.go:141] libmachine: (ha-174036-m03) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03 ...
	I0725 17:47:06.020515   23738 main.go:141] libmachine: (ha-174036-m03) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 17:47:06.020527   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:06.020448   24535 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:47:06.020673   23738 main.go:141] libmachine: (ha-174036-m03) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 17:47:06.243986   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:06.243871   24535 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa...
	I0725 17:47:06.415514   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:06.415394   24535 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/ha-174036-m03.rawdisk...
	I0725 17:47:06.415552   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Writing magic tar header
	I0725 17:47:06.415569   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Writing SSH key tar header
	I0725 17:47:06.415581   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:06.415502   24535 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03 ...
	I0725 17:47:06.415599   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03
	I0725 17:47:06.415614   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03 (perms=drwx------)
	I0725 17:47:06.415624   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 17:47:06.415635   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:47:06.415648   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 17:47:06.415662   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 17:47:06.415678   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 17:47:06.415690   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 17:47:06.415702   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 17:47:06.415713   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins
	I0725 17:47:06.415722   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 17:47:06.415735   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 17:47:06.415745   23738 main.go:141] libmachine: (ha-174036-m03) Creating domain...
	I0725 17:47:06.415756   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home
	I0725 17:47:06.415766   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Skipping /home - not owner
	I0725 17:47:06.416796   23738 main.go:141] libmachine: (ha-174036-m03) define libvirt domain using xml: 
	I0725 17:47:06.416815   23738 main.go:141] libmachine: (ha-174036-m03) <domain type='kvm'>
	I0725 17:47:06.416823   23738 main.go:141] libmachine: (ha-174036-m03)   <name>ha-174036-m03</name>
	I0725 17:47:06.416828   23738 main.go:141] libmachine: (ha-174036-m03)   <memory unit='MiB'>2200</memory>
	I0725 17:47:06.416834   23738 main.go:141] libmachine: (ha-174036-m03)   <vcpu>2</vcpu>
	I0725 17:47:06.416839   23738 main.go:141] libmachine: (ha-174036-m03)   <features>
	I0725 17:47:06.416845   23738 main.go:141] libmachine: (ha-174036-m03)     <acpi/>
	I0725 17:47:06.416850   23738 main.go:141] libmachine: (ha-174036-m03)     <apic/>
	I0725 17:47:06.416857   23738 main.go:141] libmachine: (ha-174036-m03)     <pae/>
	I0725 17:47:06.416862   23738 main.go:141] libmachine: (ha-174036-m03)     
	I0725 17:47:06.416867   23738 main.go:141] libmachine: (ha-174036-m03)   </features>
	I0725 17:47:06.416872   23738 main.go:141] libmachine: (ha-174036-m03)   <cpu mode='host-passthrough'>
	I0725 17:47:06.416878   23738 main.go:141] libmachine: (ha-174036-m03)   
	I0725 17:47:06.416885   23738 main.go:141] libmachine: (ha-174036-m03)   </cpu>
	I0725 17:47:06.416891   23738 main.go:141] libmachine: (ha-174036-m03)   <os>
	I0725 17:47:06.416897   23738 main.go:141] libmachine: (ha-174036-m03)     <type>hvm</type>
	I0725 17:47:06.416903   23738 main.go:141] libmachine: (ha-174036-m03)     <boot dev='cdrom'/>
	I0725 17:47:06.416908   23738 main.go:141] libmachine: (ha-174036-m03)     <boot dev='hd'/>
	I0725 17:47:06.416913   23738 main.go:141] libmachine: (ha-174036-m03)     <bootmenu enable='no'/>
	I0725 17:47:06.416920   23738 main.go:141] libmachine: (ha-174036-m03)   </os>
	I0725 17:47:06.416925   23738 main.go:141] libmachine: (ha-174036-m03)   <devices>
	I0725 17:47:06.416932   23738 main.go:141] libmachine: (ha-174036-m03)     <disk type='file' device='cdrom'>
	I0725 17:47:06.416941   23738 main.go:141] libmachine: (ha-174036-m03)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/boot2docker.iso'/>
	I0725 17:47:06.416952   23738 main.go:141] libmachine: (ha-174036-m03)       <target dev='hdc' bus='scsi'/>
	I0725 17:47:06.416963   23738 main.go:141] libmachine: (ha-174036-m03)       <readonly/>
	I0725 17:47:06.416973   23738 main.go:141] libmachine: (ha-174036-m03)     </disk>
	I0725 17:47:06.416985   23738 main.go:141] libmachine: (ha-174036-m03)     <disk type='file' device='disk'>
	I0725 17:47:06.416995   23738 main.go:141] libmachine: (ha-174036-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 17:47:06.417006   23738 main.go:141] libmachine: (ha-174036-m03)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/ha-174036-m03.rawdisk'/>
	I0725 17:47:06.417016   23738 main.go:141] libmachine: (ha-174036-m03)       <target dev='hda' bus='virtio'/>
	I0725 17:47:06.417054   23738 main.go:141] libmachine: (ha-174036-m03)     </disk>
	I0725 17:47:06.417080   23738 main.go:141] libmachine: (ha-174036-m03)     <interface type='network'>
	I0725 17:47:06.417090   23738 main.go:141] libmachine: (ha-174036-m03)       <source network='mk-ha-174036'/>
	I0725 17:47:06.417102   23738 main.go:141] libmachine: (ha-174036-m03)       <model type='virtio'/>
	I0725 17:47:06.417129   23738 main.go:141] libmachine: (ha-174036-m03)     </interface>
	I0725 17:47:06.417150   23738 main.go:141] libmachine: (ha-174036-m03)     <interface type='network'>
	I0725 17:47:06.417165   23738 main.go:141] libmachine: (ha-174036-m03)       <source network='default'/>
	I0725 17:47:06.417177   23738 main.go:141] libmachine: (ha-174036-m03)       <model type='virtio'/>
	I0725 17:47:06.417190   23738 main.go:141] libmachine: (ha-174036-m03)     </interface>
	I0725 17:47:06.417201   23738 main.go:141] libmachine: (ha-174036-m03)     <serial type='pty'>
	I0725 17:47:06.417211   23738 main.go:141] libmachine: (ha-174036-m03)       <target port='0'/>
	I0725 17:47:06.417225   23738 main.go:141] libmachine: (ha-174036-m03)     </serial>
	I0725 17:47:06.417237   23738 main.go:141] libmachine: (ha-174036-m03)     <console type='pty'>
	I0725 17:47:06.417247   23738 main.go:141] libmachine: (ha-174036-m03)       <target type='serial' port='0'/>
	I0725 17:47:06.417257   23738 main.go:141] libmachine: (ha-174036-m03)     </console>
	I0725 17:47:06.417267   23738 main.go:141] libmachine: (ha-174036-m03)     <rng model='virtio'>
	I0725 17:47:06.417281   23738 main.go:141] libmachine: (ha-174036-m03)       <backend model='random'>/dev/random</backend>
	I0725 17:47:06.417292   23738 main.go:141] libmachine: (ha-174036-m03)     </rng>
	I0725 17:47:06.417303   23738 main.go:141] libmachine: (ha-174036-m03)     
	I0725 17:47:06.417327   23738 main.go:141] libmachine: (ha-174036-m03)     
	I0725 17:47:06.417339   23738 main.go:141] libmachine: (ha-174036-m03)   </devices>
	I0725 17:47:06.417349   23738 main.go:141] libmachine: (ha-174036-m03) </domain>
	I0725 17:47:06.417359   23738 main.go:141] libmachine: (ha-174036-m03) 
	I0725 17:47:06.423941   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:d2:b9:6e in network default
	I0725 17:47:06.424555   23738 main.go:141] libmachine: (ha-174036-m03) Ensuring networks are active...
	I0725 17:47:06.424587   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:06.425393   23738 main.go:141] libmachine: (ha-174036-m03) Ensuring network default is active
	I0725 17:47:06.425810   23738 main.go:141] libmachine: (ha-174036-m03) Ensuring network mk-ha-174036 is active
	I0725 17:47:06.426261   23738 main.go:141] libmachine: (ha-174036-m03) Getting domain xml...
	I0725 17:47:06.427092   23738 main.go:141] libmachine: (ha-174036-m03) Creating domain...
	I0725 17:47:07.634394   23738 main.go:141] libmachine: (ha-174036-m03) Waiting to get IP...
	I0725 17:47:07.635375   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:07.635795   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:07.635840   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:07.635779   24535 retry.go:31] will retry after 276.28905ms: waiting for machine to come up
	I0725 17:47:07.913228   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:07.913632   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:07.913665   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:07.913587   24535 retry.go:31] will retry after 312.407761ms: waiting for machine to come up
	I0725 17:47:08.228074   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:08.228534   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:08.228559   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:08.228485   24535 retry.go:31] will retry after 351.367598ms: waiting for machine to come up
	I0725 17:47:08.581023   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:08.581512   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:08.581547   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:08.581458   24535 retry.go:31] will retry after 446.660652ms: waiting for machine to come up
	I0725 17:47:09.030021   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:09.030503   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:09.030523   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:09.030459   24535 retry.go:31] will retry after 522.331171ms: waiting for machine to come up
	I0725 17:47:09.554166   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:09.554592   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:09.554621   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:09.554549   24535 retry.go:31] will retry after 586.124916ms: waiting for machine to come up
	I0725 17:47:10.141876   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:10.142310   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:10.142341   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:10.142264   24535 retry.go:31] will retry after 1.030881544s: waiting for machine to come up
	I0725 17:47:11.175199   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:11.175672   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:11.175703   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:11.175632   24535 retry.go:31] will retry after 1.173789187s: waiting for machine to come up
	I0725 17:47:12.351103   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:12.351627   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:12.351655   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:12.351558   24535 retry.go:31] will retry after 1.456003509s: waiting for machine to come up
	I0725 17:47:13.809169   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:13.809755   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:13.809781   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:13.809690   24535 retry.go:31] will retry after 2.262366194s: waiting for machine to come up
	I0725 17:47:16.074108   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:16.074663   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:16.074705   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:16.074637   24535 retry.go:31] will retry after 1.83642278s: waiting for machine to come up
	I0725 17:47:17.913594   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:17.914068   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:17.914110   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:17.914028   24535 retry.go:31] will retry after 2.300261449s: waiting for machine to come up
	I0725 17:47:20.217284   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:20.217819   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:20.217845   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:20.217749   24535 retry.go:31] will retry after 3.900460116s: waiting for machine to come up
	I0725 17:47:24.121432   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:24.121920   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:24.121948   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:24.121884   24535 retry.go:31] will retry after 4.780794251s: waiting for machine to come up
	I0725 17:47:28.906153   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:28.906612   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has current primary IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:28.906651   23738 main.go:141] libmachine: (ha-174036-m03) Found IP for machine: 192.168.39.253
	I0725 17:47:28.906676   23738 main.go:141] libmachine: (ha-174036-m03) Reserving static IP address...
	I0725 17:47:28.907028   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find host DHCP lease matching {name: "ha-174036-m03", mac: "52:54:00:44:8c:91", ip: "192.168.39.253"} in network mk-ha-174036
	I0725 17:47:28.979167   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Getting to WaitForSSH function...
	I0725 17:47:28.979197   23738 main.go:141] libmachine: (ha-174036-m03) Reserved static IP address: 192.168.39.253
	I0725 17:47:28.979210   23738 main.go:141] libmachine: (ha-174036-m03) Waiting for SSH to be available...
	I0725 17:47:28.981966   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:28.982399   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:28.982424   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:28.982612   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Using SSH client type: external
	I0725 17:47:28.982637   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa (-rw-------)
	I0725 17:47:28.982664   23738 main.go:141] libmachine: (ha-174036-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 17:47:28.982678   23738 main.go:141] libmachine: (ha-174036-m03) DBG | About to run SSH command:
	I0725 17:47:28.982691   23738 main.go:141] libmachine: (ha-174036-m03) DBG | exit 0
	I0725 17:47:29.104524   23738 main.go:141] libmachine: (ha-174036-m03) DBG | SSH cmd err, output: <nil>: 
	I0725 17:47:29.104792   23738 main.go:141] libmachine: (ha-174036-m03) KVM machine creation complete!
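The "will retry after …" lines above show libmachine polling libvirt for the new domain's DHCP lease, sleeping a growing, jittered interval between attempts until an IP appears. A minimal Go sketch of that poll-with-backoff pattern; the initial interval, jitter, and growth factor here are illustrative assumptions, not minikube's exact retry.go parameters:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address, sleeping a jittered,
	// growing interval between attempts, like the retry.go lines in the log above.
	// Initial wait, jitter, and growth factor are illustrative assumptions.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		wait := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			wait += wait / 2 // grow the base interval each round
		}
		return "", fmt.Errorf("machine did not report an IP after %d attempts", attempts)
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 3 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.253", nil // the address the log eventually finds
		}, 10)
		fmt.Println(ip, err)
	}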
	I0725 17:47:29.105082   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetConfigRaw
	I0725 17:47:29.105588   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:29.105812   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:29.105968   23738 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 17:47:29.105982   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetState
	I0725 17:47:29.107287   23738 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 17:47:29.107300   23738 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 17:47:29.107305   23738 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 17:47:29.107311   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.109674   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.110232   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.110247   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.110490   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.110674   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.110822   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.110993   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.111133   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:29.111379   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:29.111406   23738 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 17:47:29.211331   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:47:29.211353   23738 main.go:141] libmachine: Detecting the provisioner...
	I0725 17:47:29.211365   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.214126   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.214477   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.214506   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.214720   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.214934   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.215100   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.215258   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.215395   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:29.215555   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:29.215574   23738 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 17:47:29.316900   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 17:47:29.316991   23738 main.go:141] libmachine: found compatible host: buildroot
	I0725 17:47:29.317005   23738 main.go:141] libmachine: Provisioning with buildroot...
	I0725 17:47:29.317013   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetMachineName
	I0725 17:47:29.317252   23738 buildroot.go:166] provisioning hostname "ha-174036-m03"
	I0725 17:47:29.317280   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetMachineName
	I0725 17:47:29.317469   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.320169   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.320705   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.320741   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.320944   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.321149   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.321335   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.321526   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.321704   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:29.321855   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:29.321870   23738 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174036-m03 && echo "ha-174036-m03" | sudo tee /etc/hostname
	I0725 17:47:29.441455   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174036-m03
	
	I0725 17:47:29.441483   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.444461   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.444839   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.444855   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.445070   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.445250   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.445430   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.445615   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.445789   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:29.445952   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:29.445966   23738 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174036-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174036-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174036-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:47:29.561536   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:47:29.561568   23738 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 17:47:29.561586   23738 buildroot.go:174] setting up certificates
	I0725 17:47:29.561595   23738 provision.go:84] configureAuth start
	I0725 17:47:29.561607   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetMachineName
	I0725 17:47:29.561852   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:47:29.564773   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.565253   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.565279   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.565506   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.568428   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.568915   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.568945   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.569100   23738 provision.go:143] copyHostCerts
	I0725 17:47:29.569133   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:47:29.569171   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 17:47:29.569181   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:47:29.569265   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 17:47:29.569360   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:47:29.569384   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 17:47:29.569393   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:47:29.569426   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 17:47:29.569510   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:47:29.569539   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 17:47:29.569548   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:47:29.569596   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 17:47:29.569672   23738 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.ha-174036-m03 san=[127.0.0.1 192.168.39.253 ha-174036-m03 localhost minikube]
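The server certificate generated here is specific to this machine and carries the SAN list from the log (127.0.0.1, 192.168.39.253, ha-174036-m03, localhost, minikube) with org jenkins.ha-174036-m03. A minimal crypto/x509 sketch of building a certificate with those SANs; it self-signs for brevity and assumes an RSA-2048 key and one-year validity, whereas minikube signs the real server.pem with its CA key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Key size and validity period are illustrative assumptions.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-174036-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirror the san=[...] list logged above.
			DNSNames:    []string{"ha-174036-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.253")},
		}
		// Self-signed here for brevity; minikube signs with its own CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("generated %d-byte DER cert with SANs %v %v\n", len(der), tmpl.DNSNames, tmpl.IPAddresses)
	}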
	I0725 17:47:29.755228   23738 provision.go:177] copyRemoteCerts
	I0725 17:47:29.755279   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:47:29.755301   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.758170   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.758515   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.758583   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.758689   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.758879   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.759063   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.759224   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:47:29.837734   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 17:47:29.837823   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 17:47:29.863548   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 17:47:29.863610   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 17:47:29.887142   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 17:47:29.887207   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 17:47:29.908900   23738 provision.go:87] duration metric: took 347.291166ms to configureAuth
	I0725 17:47:29.908928   23738 buildroot.go:189] setting minikube options for container-runtime
	I0725 17:47:29.909156   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:47:29.909237   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.912126   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.912498   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.912524   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.912744   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.912902   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.913051   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.913125   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.913254   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:29.913428   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:29.913447   23738 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 17:47:30.188871   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 17:47:30.188915   23738 main.go:141] libmachine: Checking connection to Docker...
	I0725 17:47:30.188927   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetURL
	I0725 17:47:30.190321   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Using libvirt version 6000000
	I0725 17:47:30.192495   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.192847   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.192867   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.193018   23738 main.go:141] libmachine: Docker is up and running!
	I0725 17:47:30.193040   23738 main.go:141] libmachine: Reticulating splines...
	I0725 17:47:30.193046   23738 client.go:171] duration metric: took 24.17509551s to LocalClient.Create
	I0725 17:47:30.193077   23738 start.go:167] duration metric: took 24.175175089s to libmachine.API.Create "ha-174036"
	I0725 17:47:30.193090   23738 start.go:293] postStartSetup for "ha-174036-m03" (driver="kvm2")
	I0725 17:47:30.193103   23738 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:47:30.193127   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:30.193342   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:47:30.193381   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:30.195929   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.196262   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.196286   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.196468   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:30.196661   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:30.196786   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:30.196934   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:47:30.274721   23738 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:47:30.278949   23738 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 17:47:30.278974   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 17:47:30.279050   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 17:47:30.279138   23738 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 17:47:30.279149   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 17:47:30.279270   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:47:30.288261   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:47:30.311910   23738 start.go:296] duration metric: took 118.808085ms for postStartSetup
	I0725 17:47:30.311982   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetConfigRaw
	I0725 17:47:30.312607   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:47:30.315653   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.316044   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.316070   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.316427   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:47:30.316631   23738 start.go:128] duration metric: took 24.31693959s to createHost
	I0725 17:47:30.316652   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:30.318999   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.319393   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.319421   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.319554   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:30.319735   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:30.319887   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:30.320039   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:30.320184   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:30.320394   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:30.320407   23738 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 17:47:30.420797   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929650.379347023
	
	I0725 17:47:30.420830   23738 fix.go:216] guest clock: 1721929650.379347023
	I0725 17:47:30.420843   23738 fix.go:229] Guest: 2024-07-25 17:47:30.379347023 +0000 UTC Remote: 2024-07-25 17:47:30.316641621 +0000 UTC m=+150.000690675 (delta=62.705402ms)
	I0725 17:47:30.420867   23738 fix.go:200] guest clock delta is within tolerance: 62.705402ms
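The fix.go lines above compare the guest's reported clock against the host reading and accept the 62.705402ms delta as within tolerance. A small Go sketch of that comparison, using the timestamps from the log; the one-second tolerance is an assumption for illustration, not minikube's actual threshold:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports whether guest/host clock skew stays within tolerance.
	// The 1s tolerance is an illustrative assumption; minikube defines its own.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Guest clock from the log: 1721929650.379347023; the host reading was ~62.7ms earlier.
		guest := time.Unix(1721929650, 379347023)
		host := guest.Add(-62705402 * time.Nanosecond)
		d, ok := clockDeltaOK(guest, host, time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
	}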
	I0725 17:47:30.420874   23738 start.go:83] releasing machines lock for "ha-174036-m03", held for 24.421343893s
	I0725 17:47:30.420898   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:30.421209   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:47:30.424796   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.425218   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.425244   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.426980   23738 out.go:177] * Found network options:
	I0725 17:47:30.428405   23738 out.go:177]   - NO_PROXY=192.168.39.165,192.168.39.197
	W0725 17:47:30.429737   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 17:47:30.429768   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 17:47:30.429787   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:30.430386   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:30.430612   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:30.430731   23738 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 17:47:30.430770   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	W0725 17:47:30.430824   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 17:47:30.430853   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 17:47:30.430981   23738 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 17:47:30.431009   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:30.433666   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.433923   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.434113   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.434139   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.434306   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.434333   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.434346   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:30.434531   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:30.434539   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:30.434681   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:30.434751   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:30.434825   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:30.434911   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:47:30.434968   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:47:30.665372   23738 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 17:47:30.671008   23738 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 17:47:30.671083   23738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 17:47:30.687466   23738 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 17:47:30.687490   23738 start.go:495] detecting cgroup driver to use...
	I0725 17:47:30.687589   23738 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 17:47:30.704846   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 17:47:30.718497   23738 docker.go:217] disabling cri-docker service (if available) ...
	I0725 17:47:30.718557   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 17:47:30.734205   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 17:47:30.747700   23738 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 17:47:30.877079   23738 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 17:47:31.022238   23738 docker.go:233] disabling docker service ...
	I0725 17:47:31.022307   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 17:47:31.035702   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 17:47:31.047950   23738 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 17:47:31.168087   23738 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 17:47:31.294928   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 17:47:31.308064   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:47:31.325628   23738 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 17:47:31.325689   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.335135   23738 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 17:47:31.335209   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.344896   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.354598   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.364175   23738 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 17:47:31.374418   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.383970   23738 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.400144   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.409589   23738 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 17:47:31.418301   23738 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 17:47:31.418348   23738 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 17:47:31.429829   23738 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 17:47:31.439026   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:47:31.567752   23738 ssh_runner.go:195] Run: sudo systemctl restart crio
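The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A rough Go equivalent of the first two substitutions, operating on an illustrative config fragment rather than the real drop-in:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Illustrative starting contents; the real 02-crio.conf is larger.
		conf := `pause_image = "registry.k8s.io/pause:3.8"` + "\n" + `cgroup_manager = "systemd"`

		// Mirror of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

		// Mirror of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		fmt.Println(conf)
	}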
	I0725 17:47:31.697089   23738 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 17:47:31.697150   23738 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 17:47:31.701513   23738 start.go:563] Will wait 60s for crictl version
	I0725 17:47:31.701591   23738 ssh_runner.go:195] Run: which crictl
	I0725 17:47:31.705333   23738 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 17:47:31.744775   23738 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 17:47:31.744860   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:47:31.773053   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:47:31.802779   23738 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 17:47:31.804281   23738 out.go:177]   - env NO_PROXY=192.168.39.165
	I0725 17:47:31.805566   23738 out.go:177]   - env NO_PROXY=192.168.39.165,192.168.39.197
	I0725 17:47:31.806678   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:47:31.809588   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:31.810014   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:31.810040   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:31.810252   23738 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 17:47:31.814039   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:47:31.826045   23738 mustload.go:65] Loading cluster: ha-174036
	I0725 17:47:31.826299   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:47:31.826543   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:47:31.826577   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:47:31.841041   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I0725 17:47:31.841482   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:47:31.841992   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:47:31.842016   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:47:31.842322   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:47:31.842497   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:47:31.843997   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:47:31.844306   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:47:31.844362   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:47:31.859540   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0725 17:47:31.859990   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:47:31.860424   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:47:31.860445   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:47:31.860735   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:47:31.861392   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:47:31.861548   23738 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036 for IP: 192.168.39.253
	I0725 17:47:31.861558   23738 certs.go:194] generating shared ca certs ...
	I0725 17:47:31.861570   23738 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:47:31.861695   23738 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 17:47:31.861732   23738 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 17:47:31.861739   23738 certs.go:256] generating profile certs ...
	I0725 17:47:31.861800   23738 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key
	I0725 17:47:31.861824   23738 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.d1562e16
	I0725 17:47:31.861838   23738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.d1562e16 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.197 192.168.39.253 192.168.39.254]
	I0725 17:47:31.960154   23738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.d1562e16 ...
	I0725 17:47:31.960181   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.d1562e16: {Name:mk567cb329724f7d5be3ef9d2ac018eed8def8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:47:31.960345   23738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.d1562e16 ...
	I0725 17:47:31.960358   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.d1562e16: {Name:mke962a58894b471ea02d085e827bcbcccbc3ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:47:31.960426   23738 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.d1562e16 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt
	I0725 17:47:31.960552   23738 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.d1562e16 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key
	I0725 17:47:31.960674   23738 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key
	I0725 17:47:31.960689   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 17:47:31.960700   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 17:47:31.960713   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 17:47:31.960725   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 17:47:31.960735   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 17:47:31.960747   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 17:47:31.960762   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 17:47:31.960774   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 17:47:31.960814   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 17:47:31.960840   23738 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 17:47:31.960849   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:47:31.960871   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 17:47:31.960891   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:47:31.960913   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 17:47:31.960949   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:47:31.960974   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:47:31.960987   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 17:47:31.961001   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 17:47:31.961030   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:47:31.963724   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:47:31.964094   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:47:31.964124   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:47:31.964224   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:47:31.964431   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:47:31.964608   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:47:31.964727   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:47:32.040636   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0725 17:47:32.045700   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0725 17:47:32.058258   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0725 17:47:32.063158   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0725 17:47:32.073727   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0725 17:47:32.077838   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0725 17:47:32.089179   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0725 17:47:32.094387   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0725 17:47:32.106398   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0725 17:47:32.110441   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0725 17:47:32.120948   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0725 17:47:32.128154   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0725 17:47:32.140221   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:47:32.164723   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 17:47:32.187719   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:47:32.211502   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 17:47:32.233875   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0725 17:47:32.256196   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 17:47:32.277769   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:47:32.300660   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:47:32.323709   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:47:32.345708   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 17:47:32.367308   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 17:47:32.391779   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0725 17:47:32.406921   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0725 17:47:32.424209   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0725 17:47:32.440903   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0725 17:47:32.457558   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0725 17:47:32.472596   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0725 17:47:32.488844   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0725 17:47:32.504138   23738 ssh_runner.go:195] Run: openssl version
	I0725 17:47:32.509438   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:47:32.521083   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:47:32.525346   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:47:32.525408   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:47:32.531006   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:47:32.542348   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 17:47:32.553546   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 17:47:32.557553   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 17:47:32.557601   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 17:47:32.562820   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 17:47:32.574273   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 17:47:32.585053   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 17:47:32.589196   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 17:47:32.589258   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 17:47:32.594782   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 17:47:32.605575   23738 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 17:47:32.609467   23738 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 17:47:32.609519   23738 kubeadm.go:934] updating node {m03 192.168.39.253 8443 v1.30.3 crio true true} ...
	I0725 17:47:32.609604   23738 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174036-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
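The [Unit]/[Service] fragment above is the kubelet drop-in written for the new node, pointing ExecStart at the versioned kubelet binary with the node-specific hostname override and node IP. A hedged sketch of rendering such a drop-in with text/template; the template text only reflects the values visible in this log, not the real template minikube ships:

package kubeletconf

import (
	"bytes"
	"text/template"
)

// kubeletExecStart is an illustrative drop-in body; the file actually written
// below lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
const kubeletExecStart = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

// Render fills in the node-specific values (here the ones for m03 in the log).
func Render() (string, error) {
	t := template.Must(template.New("kubelet").Parse(kubeletExecStart))
	var buf bytes.Buffer
	err := t.Execute(&buf, struct{ KubernetesVersion, NodeName, NodeIP string }{
		"v1.30.3", "ha-174036-m03", "192.168.39.253",
	})
	return buf.String(), err
}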
	I0725 17:47:32.609635   23738 kube-vip.go:115] generating kube-vip config ...
	I0725 17:47:32.609672   23738 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 17:47:32.624865   23738 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 17:47:32.624956   23738 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
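The manifest above runs kube-vip as a static pod on each control-plane node: the instances elect a leader through the plndr-cp-lock lease (vip_leaderelection), and the leader announces the shared VIP 192.168.39.254 on eth0 and load-balances API-server traffic on port 8443 (cp_enable/lb_enable). A small sketch, assuming sigs.k8s.io/yaml and the core v1 types, that reads those settings back out of a manifest like this one:

package kubevipcheck

import (
	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

// vipSettings pulls the HA-relevant fields out of a generated kube-vip
// static-pod manifest: the shared address, the interface it is announced on,
// and whether control-plane load balancing (lb_enable) is switched on.
func vipSettings(manifest []byte) (addr, iface string, lbEnabled bool, err error) {
	var pod corev1.Pod
	if err = yaml.Unmarshal(manifest, &pod); err != nil {
		return
	}
	if len(pod.Spec.Containers) == 0 {
		return
	}
	for _, env := range pod.Spec.Containers[0].Env {
		switch env.Name {
		case "address":
			addr = env.Value
		case "vip_interface":
			iface = env.Value
		case "lb_enable":
			lbEnabled = env.Value == "true"
		}
	}
	return
}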
	I0725 17:47:32.625018   23738 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 17:47:32.635197   23738 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0725 17:47:32.635255   23738 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0725 17:47:32.644188   23738 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0725 17:47:32.644215   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0725 17:47:32.644275   23738 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0725 17:47:32.644283   23738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0725 17:47:32.644297   23738 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0725 17:47:32.644336   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:47:32.644339   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0725 17:47:32.644647   23738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0725 17:47:32.649064   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0725 17:47:32.649088   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0725 17:47:32.676867   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0725 17:47:32.676945   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0725 17:47:32.676971   23738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0725 17:47:32.676984   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0725 17:47:32.719424   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0725 17:47:32.719471   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
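The transfer above follows a simple check-then-copy pattern: stat each remote binary (size and mtime) and only scp kubectl, kubeadm and kubelet from the local cache when the stat fails. A rough Go sketch of that decision; runRemote and copyToRemote are hypothetical stand-ins for an SSH runner, and the host name "node" is made up:

package main

import (
	"fmt"
	"os/exec"
)

// ensureRemoteBinary pushes the locally cached binary only if a remote stat
// fails, mirroring the "existence check ... Process exited with status 1"
// lines above.
func ensureRemoteBinary(localCache, remotePath string,
	runRemote func(args ...string) error,
	copyToRemote func(src, dst string) error) error {
	if err := runRemote("stat", "-c", "%s %y", remotePath); err == nil {
		return nil // already present, skip the transfer
	}
	return copyToRemote(localCache, remotePath)
}

func main() {
	runRemote := func(args ...string) error {
		return exec.Command("ssh", append([]string{"node"}, args...)...).Run()
	}
	copyToRemote := func(src, dst string) error {
		return exec.Command("scp", src, "node:"+dst).Run()
	}
	err := ensureRemoteBinary(
		"/home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubelet",
		"/var/lib/minikube/binaries/v1.30.3/kubelet",
		runRemote, copyToRemote)
	fmt.Println(err)
}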
	I0725 17:47:33.532625   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0725 17:47:33.541624   23738 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0725 17:47:33.556660   23738 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:47:33.574601   23738 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0725 17:47:33.591732   23738 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0725 17:47:33.595587   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
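The grep/echo pipeline above removes any stale control-plane.minikube.internal line from /etc/hosts and appends the VIP mapping, going through a temp file so the rewrite lands in one step. A minimal Go sketch of the same idea, with the VIP and hostname taken from the log and the privilege handling omitted:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line mapping
// name to ip, mirroring the grep-then-echo rewrite in the log above.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for the name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath) // replace in one step, like "cp /tmp/h.$$ /etc/hosts"
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"))
}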
	I0725 17:47:33.606947   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:47:33.733582   23738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:47:33.750463   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:47:33.750950   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:47:33.751005   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:47:33.769058   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0725 17:47:33.769470   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:47:33.769932   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:47:33.769953   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:47:33.770267   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:47:33.770603   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:47:33.770763   23738 start.go:317] joinCluster: &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:47:33.770881   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0725 17:47:33.770901   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:47:33.773973   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:47:33.774466   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:47:33.774493   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:47:33.774629   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:47:33.774802   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:47:33.774976   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:47:33.775124   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:47:33.929157   23738 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:47:33.929205   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qeias8.mpk1vfnxbq293g06 --discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174036-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443"
	I0725 17:47:57.541987   23738 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qeias8.mpk1vfnxbq293g06 --discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174036-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443": (23.612751595s)
	I0725 17:47:57.542023   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0725 17:47:58.157111   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174036-m03 minikube.k8s.io/updated_at=2024_07_25T17_47_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=ha-174036 minikube.k8s.io/primary=false
	I0725 17:47:58.334203   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174036-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0725 17:47:58.447957   23738 start.go:319] duration metric: took 24.677188517s to joinCluster
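By this point the third control-plane node has been joined with kubeadm join against the VIP endpoint, then labeled and had its control-plane NoSchedule taint removed so it can also run workloads. A hedged sketch of how such a join invocation can be assembled; the token and discovery hash are the ones printed above and would normally come from "kubeadm token create --print-join-command" on an existing control-plane node:

package main

import (
	"fmt"
	"os/exec"
)

// buildJoinArgs assembles the same style of control-plane join invocation seen
// in the log; the values below are the ones visible above, not fresh ones.
func buildJoinArgs(endpoint, token, caHash, nodeName, advertiseIP string) []string {
	return []string{
		"join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--control-plane",
		"--apiserver-advertise-address", advertiseIP,
		"--apiserver-bind-port", "8443",
		"--node-name", nodeName,
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--ignore-preflight-errors=all",
	}
}

func main() {
	args := buildJoinArgs("control-plane.minikube.internal:8443",
		"qeias8.mpk1vfnxbq293g06",
		"sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244",
		"ha-174036-m03", "192.168.39.253")
	cmd := exec.Command("kubeadm", args...)
	fmt.Println(cmd.String()) // print rather than run in this sketch
}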
	I0725 17:47:58.448028   23738 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:47:58.448412   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:47:58.449023   23738 out.go:177] * Verifying Kubernetes components...
	I0725 17:47:58.450589   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:47:58.698316   23738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:47:58.718280   23738 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:47:58.718599   23738 kapi.go:59] client config for ha-174036: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0725 17:47:58.718695   23738 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.165:8443
	I0725 17:47:58.718886   23738 node_ready.go:35] waiting up to 6m0s for node "ha-174036-m03" to be "Ready" ...
	I0725 17:47:58.718969   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:47:58.718979   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:58.718990   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:58.718999   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:58.722618   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:59.219135   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:47:59.219158   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:59.219170   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:59.219174   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:59.222294   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:59.719212   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:47:59.719233   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:59.719243   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:59.719249   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:59.722731   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:00.219445   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:00.219469   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:00.219477   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:00.219481   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:00.222884   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:00.719440   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:00.719467   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:00.719480   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:00.719491   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:00.723146   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:00.723842   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:01.219773   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:01.219795   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:01.219805   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:01.219811   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:01.223120   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:01.719063   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:01.719084   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:01.719091   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:01.719097   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:01.722342   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:02.219353   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:02.219372   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:02.219381   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:02.219387   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:02.222592   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:02.719391   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:02.719418   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:02.719429   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:02.719435   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:02.723071   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:03.220025   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:03.220046   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:03.220054   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:03.220057   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:03.224214   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:48:03.224899   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:03.719264   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:03.719283   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:03.719304   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:03.719309   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:03.722967   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:04.219329   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:04.219349   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:04.219357   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:04.219362   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:04.223058   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:04.719975   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:04.720000   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:04.720010   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:04.720018   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:04.730270   23738 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0725 17:48:05.220079   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:05.220100   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:05.220110   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:05.220115   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:05.223432   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:05.719932   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:05.719953   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:05.719962   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:05.719967   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:05.723272   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:05.723770   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:06.219856   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:06.219878   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:06.219885   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:06.219890   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:06.223221   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:06.719324   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:06.719348   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:06.719356   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:06.719360   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:06.722895   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:07.219728   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:07.219752   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:07.219763   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:07.219769   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:07.223364   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:07.719181   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:07.719204   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:07.719211   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:07.719214   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:07.722485   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:08.219257   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:08.219308   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:08.219321   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:08.219328   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:08.222606   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:08.223098   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:08.719258   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:08.719279   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:08.719303   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:08.719313   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:08.723115   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:09.219392   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:09.219411   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:09.219419   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:09.219427   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:09.222531   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:09.719588   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:09.719613   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:09.719623   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:09.719654   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:09.722904   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:10.219781   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:10.219803   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:10.219814   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:10.219821   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:10.222896   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:10.223460   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:10.719656   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:10.719675   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:10.719683   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:10.719687   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:10.723441   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:11.219804   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:11.219828   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:11.219838   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:11.219845   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:11.223106   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:11.720078   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:11.720096   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:11.720104   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:11.720109   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:11.723524   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:12.219524   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:12.219545   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:12.219554   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:12.219557   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:12.223317   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:12.224194   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:12.719753   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:12.719781   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:12.719795   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:12.719800   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:12.722724   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:13.219797   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:13.219822   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:13.219835   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:13.219840   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:13.222932   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:13.719110   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:13.719136   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:13.719147   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:13.719153   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:13.722461   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:14.219093   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:14.219119   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:14.219132   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:14.219137   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:14.222878   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:14.719715   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:14.719742   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:14.719750   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:14.719754   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:14.723507   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:14.724092   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:15.219245   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:15.219262   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.219271   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.219275   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.222370   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:15.719932   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:15.719953   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.719961   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.719965   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.723368   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:15.723985   23738 node_ready.go:49] node "ha-174036-m03" has status "Ready":"True"
	I0725 17:48:15.724003   23738 node_ready.go:38] duration metric: took 17.005101402s for node "ha-174036-m03" to be "Ready" ...
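The repeated GETs of /api/v1/nodes/ha-174036-m03 above are a plain polling loop: fetch the Node roughly every 500ms until its Ready condition reports True, which took about 17s here. An equivalent client-go sketch, assuming an already-constructed *kubernetes.Clientset:

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the API server until the named node reports Ready=True,
// mirroring the node_ready loop in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

A watch would avoid the repeated requests, but a short poll like this is simpler and matches what the log shows.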
	I0725 17:48:15.724011   23738 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:48:15.724074   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:48:15.724084   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.724091   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.724099   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.731583   23738 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0725 17:48:15.738462   23738 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.738534   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-flblg
	I0725 17:48:15.738543   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.738550   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.738553   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.741346   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.741863   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:15.741877   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.741884   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.741887   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.744242   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.744736   23738 pod_ready.go:92] pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:15.744751   23738 pod_ready.go:81] duration metric: took 6.267081ms for pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.744759   23738 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.744800   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vtr9p
	I0725 17:48:15.744807   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.744814   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.744821   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.746839   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.747321   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:15.747335   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.747345   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.747350   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.749391   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.749790   23738 pod_ready.go:92] pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:15.749807   23738 pod_ready.go:81] duration metric: took 5.041261ms for pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.749818   23738 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.749878   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036
	I0725 17:48:15.749887   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.749893   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.749901   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.751999   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.752590   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:15.752601   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.752609   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.752612   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.755103   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.755910   23738 pod_ready.go:92] pod "etcd-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:15.755928   23738 pod_ready.go:81] duration metric: took 6.103409ms for pod "etcd-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.755945   23738 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.755992   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036-m02
	I0725 17:48:15.755999   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.756006   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.756009   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.758199   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.758685   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:15.758698   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.758704   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.758713   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.760829   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.761259   23738 pod_ready.go:92] pod "etcd-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:15.761272   23738 pod_ready.go:81] duration metric: took 5.317765ms for pod "etcd-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.761279   23738 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.920658   23738 request.go:629] Waited for 159.333662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036-m03
	I0725 17:48:15.920744   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036-m03
	I0725 17:48:15.920750   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.920758   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.920764   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.924276   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:16.120649   23738 request.go:629] Waited for 195.365321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:16.120714   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:16.120722   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:16.120730   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:16.120736   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:16.124364   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:16.124865   23738 pod_ready.go:92] pod "etcd-ha-174036-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:16.124882   23738 pod_ready.go:81] duration metric: took 363.597449ms for pod "etcd-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
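The "Waited ... due to client-side throttling" lines are client-go's default token-bucket rate limiter kicking in: with QPS and Burst left at 0 in the rest.Config dumped earlier, the client falls back to roughly 5 requests/s with a burst of 10, so back-to-back node and pod GETs get spaced out by ~200ms. Raising the limits when building the client is an option; a sketch, not something this run does:

package kapisketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient loads a kubeconfig and raises the client-side rate limits so
// tight polling loops are not throttled. The values here are illustrative.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go defaults to 5 when this is left at 0
	cfg.Burst = 100 // client-go defaults to 10 when this is left at 0
	return kubernetes.NewForConfig(cfg)
}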
	I0725 17:48:16.124897   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:16.321009   23738 request.go:629] Waited for 196.007507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036
	I0725 17:48:16.321059   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036
	I0725 17:48:16.321064   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:16.321070   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:16.321074   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:16.324418   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:16.520400   23738 request.go:629] Waited for 195.329584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:16.520449   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:16.520454   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:16.520482   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:16.520494   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:16.523544   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:16.524184   23738 pod_ready.go:92] pod "kube-apiserver-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:16.524207   23738 pod_ready.go:81] duration metric: took 399.30203ms for pod "kube-apiserver-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:16.524221   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:16.720217   23738 request.go:629] Waited for 195.919181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m02
	I0725 17:48:16.720295   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m02
	I0725 17:48:16.720301   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:16.720309   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:16.720315   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:16.726447   23738 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0725 17:48:16.920761   23738 request.go:629] Waited for 193.540089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:16.920846   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:16.920854   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:16.920862   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:16.920867   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:16.924432   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:16.925242   23738 pod_ready.go:92] pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:16.925261   23738 pod_ready.go:81] duration metric: took 401.032547ms for pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:16.925271   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:17.120383   23738 request.go:629] Waited for 195.022228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m03
	I0725 17:48:17.120469   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m03
	I0725 17:48:17.120475   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:17.120482   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:17.120491   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:17.123936   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:17.319960   23738 request.go:629] Waited for 195.291804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:17.320011   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:17.320017   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:17.320024   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:17.320030   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:17.323598   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:17.324103   23738 pod_ready.go:92] pod "kube-apiserver-ha-174036-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:17.324120   23738 pod_ready.go:81] duration metric: took 398.839297ms for pod "kube-apiserver-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:17.324129   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:17.520346   23738 request.go:629] Waited for 196.124151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036
	I0725 17:48:17.520410   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036
	I0725 17:48:17.520416   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:17.520423   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:17.520427   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:17.523759   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:17.720985   23738 request.go:629] Waited for 196.496138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:17.721141   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:17.721156   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:17.721167   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:17.721178   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:17.728510   23738 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0725 17:48:17.729883   23738 pod_ready.go:92] pod "kube-controller-manager-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:17.729912   23738 pod_ready.go:81] duration metric: took 405.774903ms for pod "kube-controller-manager-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:17.729929   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:17.921047   23738 request.go:629] Waited for 191.006912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m02
	I0725 17:48:17.921158   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m02
	I0725 17:48:17.921166   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:17.921175   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:17.921180   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:17.924660   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:18.120741   23738 request.go:629] Waited for 195.355142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:18.120823   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:18.120831   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:18.120839   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:18.120847   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:18.124807   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:18.125897   23738 pod_ready.go:92] pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:18.125917   23738 pod_ready.go:81] duration metric: took 395.981033ms for pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:18.125928   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:18.320946   23738 request.go:629] Waited for 194.947565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m03
	I0725 17:48:18.321034   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m03
	I0725 17:48:18.321045   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:18.321057   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:18.321065   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:18.325264   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:48:18.520737   23738 request.go:629] Waited for 194.3815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:18.520822   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:18.520832   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:18.520844   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:18.520853   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:18.524251   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:18.525009   23738 pod_ready.go:92] pod "kube-controller-manager-ha-174036-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:18.525030   23738 pod_ready.go:81] duration metric: took 399.093257ms for pod "kube-controller-manager-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:18.525044   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5klkv" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:18.720032   23738 request.go:629] Waited for 194.926984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5klkv
	I0725 17:48:18.720105   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5klkv
	I0725 17:48:18.720111   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:18.720118   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:18.720122   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:18.723688   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:18.920621   23738 request.go:629] Waited for 196.358054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:18.920711   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:18.920718   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:18.920727   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:18.920734   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:18.924836   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:48:18.925351   23738 pod_ready.go:92] pod "kube-proxy-5klkv" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:18.925371   23738 pod_ready.go:81] duration metric: took 400.32091ms for pod "kube-proxy-5klkv" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:18.925381   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6jdn" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:19.120398   23738 request.go:629] Waited for 194.943515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6jdn
	I0725 17:48:19.120449   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6jdn
	I0725 17:48:19.120454   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:19.120463   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:19.120468   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:19.124001   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:19.320374   23738 request.go:629] Waited for 195.386277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:19.320450   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:19.320470   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:19.320486   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:19.320490   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:19.324195   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:19.324867   23738 pod_ready.go:92] pod "kube-proxy-s6jdn" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:19.324886   23738 pod_ready.go:81] duration metric: took 399.499786ms for pod "kube-proxy-s6jdn" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:19.324896   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xwvdm" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:19.520953   23738 request.go:629] Waited for 195.983035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwvdm
	I0725 17:48:19.521027   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwvdm
	I0725 17:48:19.521034   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:19.521045   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:19.521055   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:19.524663   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:19.720661   23738 request.go:629] Waited for 195.346701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:19.720717   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:19.720724   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:19.720772   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:19.720782   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:19.723887   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:19.724496   23738 pod_ready.go:92] pod "kube-proxy-xwvdm" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:19.724518   23738 pod_ready.go:81] duration metric: took 399.615118ms for pod "kube-proxy-xwvdm" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:19.725022   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:19.920853   23738 request.go:629] Waited for 195.756105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036
	I0725 17:48:19.920931   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036
	I0725 17:48:19.920943   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:19.920958   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:19.920965   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:19.924401   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.120652   23738 request.go:629] Waited for 195.254606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:20.120715   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:20.120722   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.120731   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.120738   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.124100   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.124783   23738 pod_ready.go:92] pod "kube-scheduler-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:20.124804   23738 pod_ready.go:81] duration metric: took 399.766469ms for pod "kube-scheduler-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:20.124817   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:20.320407   23738 request.go:629] Waited for 195.516784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m02
	I0725 17:48:20.320469   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m02
	I0725 17:48:20.320475   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.320483   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.320487   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.323906   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.520612   23738 request.go:629] Waited for 195.929751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:20.520695   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:20.520719   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.520734   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.520745   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.524429   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.524924   23738 pod_ready.go:92] pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:20.524940   23738 pod_ready.go:81] duration metric: took 400.115378ms for pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:20.524950   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:20.720056   23738 request.go:629] Waited for 195.03201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m03
	I0725 17:48:20.720144   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m03
	I0725 17:48:20.720156   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.720167   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.720176   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.723832   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.920081   23738 request.go:629] Waited for 195.035781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:20.920141   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:20.920146   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.920154   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.920157   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.923772   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.924230   23738 pod_ready.go:92] pod "kube-scheduler-ha-174036-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:20.924249   23738 pod_ready.go:81] duration metric: took 399.291088ms for pod "kube-scheduler-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:20.924263   23738 pod_ready.go:38] duration metric: took 5.200241533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:48:20.924283   23738 api_server.go:52] waiting for apiserver process to appear ...
	I0725 17:48:20.924365   23738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:48:20.939968   23738 api_server.go:72] duration metric: took 22.491903115s to wait for apiserver process to appear ...
	I0725 17:48:20.940000   23738 api_server.go:88] waiting for apiserver healthz status ...
	I0725 17:48:20.940023   23738 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0725 17:48:20.945387   23738 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0725 17:48:20.945467   23738 round_trippers.go:463] GET https://192.168.39.165:8443/version
	I0725 17:48:20.945476   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.945483   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.945490   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.946577   23738 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 17:48:20.946636   23738 api_server.go:141] control plane version: v1.30.3
	I0725 17:48:20.946649   23738 api_server.go:131] duration metric: took 6.642298ms to wait for apiserver health ...
	I0725 17:48:20.946657   23738 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:48:21.120465   23738 request.go:629] Waited for 173.750714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:48:21.120533   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:48:21.120552   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:21.120564   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:21.120577   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:21.127448   23738 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0725 17:48:21.134655   23738 system_pods.go:59] 24 kube-system pods found
	I0725 17:48:21.134690   23738 system_pods.go:61] "coredns-7db6d8ff4d-flblg" [94857bc1-d7ba-466b-91d7-e2d5041159f2] Running
	I0725 17:48:21.134697   23738 system_pods.go:61] "coredns-7db6d8ff4d-vtr9p" [fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a] Running
	I0725 17:48:21.134702   23738 system_pods.go:61] "etcd-ha-174036" [4e6a0109-6d8e-406e-9ca5-b7190bf72eab] Running
	I0725 17:48:21.134707   23738 system_pods.go:61] "etcd-ha-174036-m02" [9d58a70b-3891-441c-8eb8-214121847c63] Running
	I0725 17:48:21.134712   23738 system_pods.go:61] "etcd-ha-174036-m03" [512972cb-1314-4a63-bbd7-2737a4338be3] Running
	I0725 17:48:21.134716   23738 system_pods.go:61] "kindnet-2c2n8" [c8ed79cb-52d7-4dfa-a3a0-02329169d86c] Running
	I0725 17:48:21.134721   23738 system_pods.go:61] "kindnet-fcznc" [795e29b8-1fad-47ca-bc4e-0809d4063a10] Running
	I0725 17:48:21.134725   23738 system_pods.go:61] "kindnet-k4d8x" [3a912cbb-8702-492d-8316-258aedbd053d] Running
	I0725 17:48:21.134731   23738 system_pods.go:61] "kube-apiserver-ha-174036" [efbc55c3-c762-4d38-9602-02454c1ce8f4] Running
	I0725 17:48:21.134735   23738 system_pods.go:61] "kube-apiserver-ha-174036-m02" [2c704058-60b1-43fb-bde8-a5f7d8a9bc4f] Running
	I0725 17:48:21.134741   23738 system_pods.go:61] "kube-apiserver-ha-174036-m03" [08ade854-8ac6-45b0-a876-ca62d31c9382] Running
	I0725 17:48:21.134747   23738 system_pods.go:61] "kube-controller-manager-ha-174036" [bfdd16fe-f72f-4f8f-b175-b78d7dec78bb] Running
	I0725 17:48:21.134758   23738 system_pods.go:61] "kube-controller-manager-ha-174036-m02" [b181bc10-9572-43f4-9242-6e0676abdc64] Running
	I0725 17:48:21.134763   23738 system_pods.go:61] "kube-controller-manager-ha-174036-m03" [e742a05b-ae60-4e7a-9f16-d7a9555423d5] Running
	I0725 17:48:21.134770   23738 system_pods.go:61] "kube-proxy-5klkv" [cc83bed2-4af8-4de2-ac28-f9b62e75297b] Running
	I0725 17:48:21.134775   23738 system_pods.go:61] "kube-proxy-s6jdn" [f13b463b-f7f9-4b49-8e29-209cb153a6e6] Running
	I0725 17:48:21.134783   23738 system_pods.go:61] "kube-proxy-xwvdm" [aa62fb5e-6304-40b4-aa20-190c9ee56057] Running
	I0725 17:48:21.134789   23738 system_pods.go:61] "kube-scheduler-ha-174036" [4174968d-5004-47e6-b8fa-8c9ab4720f09] Running
	I0725 17:48:21.134797   23738 system_pods.go:61] "kube-scheduler-ha-174036-m02" [81a367fa-3418-45a0-85ca-f20549a43a2e] Running
	I0725 17:48:21.134802   23738 system_pods.go:61] "kube-scheduler-ha-174036-m03" [a922c6b3-064b-48e7-b43c-5d46df954b5c] Running
	I0725 17:48:21.134809   23738 system_pods.go:61] "kube-vip-ha-174036" [2ce4bfe5-5441-4a28-889e-7743367f32b2] Running
	I0725 17:48:21.134813   23738 system_pods.go:61] "kube-vip-ha-174036-m02" [5a70af21-4ee6-4270-8e25-3b81618c629a] Running
	I0725 17:48:21.134820   23738 system_pods.go:61] "kube-vip-ha-174036-m03" [ca677d83-2054-428e-aa5c-d95b15a57e1d] Running
	I0725 17:48:21.134825   23738 system_pods.go:61] "storage-provisioner" [c9354422-69ff-4676-80d1-4940badf9b4e] Running
	I0725 17:48:21.134834   23738 system_pods.go:74] duration metric: took 188.171619ms to wait for pod list to return data ...
	I0725 17:48:21.134846   23738 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:48:21.320278   23738 request.go:629] Waited for 185.344351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0725 17:48:21.320366   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0725 17:48:21.320374   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:21.320384   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:21.320394   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:21.323682   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:21.323783   23738 default_sa.go:45] found service account: "default"
	I0725 17:48:21.323797   23738 default_sa.go:55] duration metric: took 188.941633ms for default service account to be created ...
	I0725 17:48:21.323805   23738 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 17:48:21.520189   23738 request.go:629] Waited for 196.302125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:48:21.520260   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:48:21.520270   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:21.520277   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:21.520284   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:21.527636   23738 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0725 17:48:21.533809   23738 system_pods.go:86] 24 kube-system pods found
	I0725 17:48:21.533834   23738 system_pods.go:89] "coredns-7db6d8ff4d-flblg" [94857bc1-d7ba-466b-91d7-e2d5041159f2] Running
	I0725 17:48:21.533839   23738 system_pods.go:89] "coredns-7db6d8ff4d-vtr9p" [fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a] Running
	I0725 17:48:21.533843   23738 system_pods.go:89] "etcd-ha-174036" [4e6a0109-6d8e-406e-9ca5-b7190bf72eab] Running
	I0725 17:48:21.533848   23738 system_pods.go:89] "etcd-ha-174036-m02" [9d58a70b-3891-441c-8eb8-214121847c63] Running
	I0725 17:48:21.533852   23738 system_pods.go:89] "etcd-ha-174036-m03" [512972cb-1314-4a63-bbd7-2737a4338be3] Running
	I0725 17:48:21.533856   23738 system_pods.go:89] "kindnet-2c2n8" [c8ed79cb-52d7-4dfa-a3a0-02329169d86c] Running
	I0725 17:48:21.533860   23738 system_pods.go:89] "kindnet-fcznc" [795e29b8-1fad-47ca-bc4e-0809d4063a10] Running
	I0725 17:48:21.533864   23738 system_pods.go:89] "kindnet-k4d8x" [3a912cbb-8702-492d-8316-258aedbd053d] Running
	I0725 17:48:21.533869   23738 system_pods.go:89] "kube-apiserver-ha-174036" [efbc55c3-c762-4d38-9602-02454c1ce8f4] Running
	I0725 17:48:21.533873   23738 system_pods.go:89] "kube-apiserver-ha-174036-m02" [2c704058-60b1-43fb-bde8-a5f7d8a9bc4f] Running
	I0725 17:48:21.533877   23738 system_pods.go:89] "kube-apiserver-ha-174036-m03" [08ade854-8ac6-45b0-a876-ca62d31c9382] Running
	I0725 17:48:21.533881   23738 system_pods.go:89] "kube-controller-manager-ha-174036" [bfdd16fe-f72f-4f8f-b175-b78d7dec78bb] Running
	I0725 17:48:21.533889   23738 system_pods.go:89] "kube-controller-manager-ha-174036-m02" [b181bc10-9572-43f4-9242-6e0676abdc64] Running
	I0725 17:48:21.533893   23738 system_pods.go:89] "kube-controller-manager-ha-174036-m03" [e742a05b-ae60-4e7a-9f16-d7a9555423d5] Running
	I0725 17:48:21.533899   23738 system_pods.go:89] "kube-proxy-5klkv" [cc83bed2-4af8-4de2-ac28-f9b62e75297b] Running
	I0725 17:48:21.533903   23738 system_pods.go:89] "kube-proxy-s6jdn" [f13b463b-f7f9-4b49-8e29-209cb153a6e6] Running
	I0725 17:48:21.533909   23738 system_pods.go:89] "kube-proxy-xwvdm" [aa62fb5e-6304-40b4-aa20-190c9ee56057] Running
	I0725 17:48:21.533913   23738 system_pods.go:89] "kube-scheduler-ha-174036" [4174968d-5004-47e6-b8fa-8c9ab4720f09] Running
	I0725 17:48:21.533917   23738 system_pods.go:89] "kube-scheduler-ha-174036-m02" [81a367fa-3418-45a0-85ca-f20549a43a2e] Running
	I0725 17:48:21.533921   23738 system_pods.go:89] "kube-scheduler-ha-174036-m03" [a922c6b3-064b-48e7-b43c-5d46df954b5c] Running
	I0725 17:48:21.533927   23738 system_pods.go:89] "kube-vip-ha-174036" [2ce4bfe5-5441-4a28-889e-7743367f32b2] Running
	I0725 17:48:21.533930   23738 system_pods.go:89] "kube-vip-ha-174036-m02" [5a70af21-4ee6-4270-8e25-3b81618c629a] Running
	I0725 17:48:21.533935   23738 system_pods.go:89] "kube-vip-ha-174036-m03" [ca677d83-2054-428e-aa5c-d95b15a57e1d] Running
	I0725 17:48:21.533939   23738 system_pods.go:89] "storage-provisioner" [c9354422-69ff-4676-80d1-4940badf9b4e] Running
	I0725 17:48:21.533945   23738 system_pods.go:126] duration metric: took 210.135527ms to wait for k8s-apps to be running ...
	I0725 17:48:21.533953   23738 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 17:48:21.533995   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:48:21.550490   23738 system_svc.go:56] duration metric: took 16.524706ms WaitForService to wait for kubelet
	I0725 17:48:21.550515   23738 kubeadm.go:582] duration metric: took 23.102455476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:48:21.550534   23738 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:48:21.720962   23738 request.go:629] Waited for 170.343393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I0725 17:48:21.721016   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I0725 17:48:21.721021   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:21.721029   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:21.721033   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:21.724763   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:21.725594   23738 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:48:21.725612   23738 node_conditions.go:123] node cpu capacity is 2
	I0725 17:48:21.725625   23738 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:48:21.725629   23738 node_conditions.go:123] node cpu capacity is 2
	I0725 17:48:21.725634   23738 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:48:21.725638   23738 node_conditions.go:123] node cpu capacity is 2
	I0725 17:48:21.725644   23738 node_conditions.go:105] duration metric: took 175.104521ms to run NodePressure ...
	I0725 17:48:21.725659   23738 start.go:241] waiting for startup goroutines ...
	I0725 17:48:21.725687   23738 start.go:255] writing updated cluster config ...
	I0725 17:48:21.725957   23738 ssh_runner.go:195] Run: rm -f paused
	I0725 17:48:21.779194   23738 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 17:48:21.781309   23738 out.go:177] * Done! kubectl is now configured to use "ha-174036" cluster and "default" namespace by default
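	The wait loop in the log above polls each control-plane pod and node until its PodReady condition reports True. The recurring "Waited for ~195ms due to client-side throttling, not priority and fairness" messages come from client-go's default client-side rate limiter (by default about 5 requests per second), not from API Priority and Fairness on the server, which is why successive GETs are spaced roughly 200ms apart. As a rough illustration only (this is not minikube's own pod_ready.go; the label selector, namespace, and QPS/Burst values below are assumptions), a minimal client-go loop that performs the same kind of readiness polling might look like:

	// Hedged sketch: poll kube-system pods matching a label until their PodReady
	// condition is True, loosely mirroring the wait loop logged above.
	// Assumes a reachable kubeconfig at the default location.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's PodReady condition is True.
	func isReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Raising QPS/Burst above the client-go defaults avoids the ~200ms
		// client-side throttling pauses visible in the log (illustrative values).
		cfg.QPS = 50
		cfg.Burst = 100

		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})
			if err != nil {
				panic(err)
			}
			ready := 0
			for i := range pods.Items {
				if isReady(&pods.Items[i]) {
					ready++
				}
			}
			if len(pods.Items) > 0 && ready == len(pods.Items) {
				fmt.Printf("all %d matching pods are Ready\n", ready)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pods to become Ready")
	}

	With the default rest.Config limits left in place, the same loop would show the 190-200ms inter-request gaps seen in the log; raising QPS/Burst, as in the sketch, removes them at the cost of more load on the apiserver.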
	
	
	==> CRI-O <==
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.240475388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13ba7011-cb6c-4ea4-b386-963ee883b9ba name=/runtime.v1.RuntimeService/Version
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.241825612Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5cdc6220-e621-4ec6-bb8e-c1764fd3ae56 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.242339812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721929920242318350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5cdc6220-e621-4ec6-bb8e-c1764fd3ae56 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.242806706Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a48c9679-859a-4d5b-a22a-715c8d13b021 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.242893511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a48c9679-859a-4d5b-a22a-715c8d13b021 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.243124115Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721929705860259350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571452334383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b4910d2dffd4c50b6c81ede4e225d7473a98b0a9834b082d0bcdc49420a72e,PodSandboxId:95f27d4d3811628521745f206fc71b1199b834cc198df0bca15269d0c0a5fafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721929571394940607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571379692343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba
2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721929559436837253,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172192955
5002599079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61b54c0418380759094fff6b18b34e532b3761b6449b09813306455a54f8ec0,PodSandboxId:d403d51e1490c2320c93a00c562f26f6f4e3da0bd721d3c74a9c87fd1aa30535,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17219295378
34324539,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb0a62f4a501312f477c94c22d0cf69,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd,PodSandboxId:9e04e99a376a1e1e62532e5b1714466bfe9df09c9f4cd099bef96b800c2cdba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721929534723301504,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721929534688682857,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721929534601299061,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526,PodSandboxId:209f2e15348a23b82a2ac856c72c9fd980284e4c17607853220bab880677bcba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721929534554019533,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a48c9679-859a-4d5b-a22a-715c8d13b021 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.244041555Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=aafddefb-995d-46fa-8c53-f4e78724b28c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.244545432Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-2mwrb,Uid:e874d68f-5f06-44af-882d-fb479da5a101,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721929703024414156,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-25T17:48:22.685426680Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:95f27d4d3811628521745f206fc71b1199b834cc198df0bca15269d0c0a5fafd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c9354422-69ff-4676-80d1-4940badf9b4e,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1721929571205035297,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-25T17:46:10.881468190Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-flblg,Uid:94857bc1-d7ba-466b-91d7-e2d5041159f2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721929571201569159,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-25T17:46:10.883423990Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vtr9p,Uid:fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1721929571177926011,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-25T17:46:10.871668979Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&PodSandboxMetadata{Name:kube-proxy-s6jdn,Uid:f13b463b-f7f9-4b49-8e29-209cb153a6e6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721929554827690417,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-25T17:45:54.511907526Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&PodSandboxMetadata{Name:kindnet-2c2n8,Uid:c8ed79cb-52d7-4dfa-a3a0-02329169d86c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721929554799839559,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-25T17:45:54.491495577Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9e04e99a376a1e1e62532e5b1714466bfe9df09c9f4cd099bef96b800c2cdba4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-174036,Uid:1a684b92a47207375cde77b0049b934b,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1721929534397249013,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.165:8443,kubernetes.io/config.hash: 1a684b92a47207375cde77b0049b934b,kubernetes.io/config.seen: 2024-07-25T17:45:33.914251540Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d403d51e1490c2320c93a00c562f26f6f4e3da0bd721d3c74a9c87fd1aa30535,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-174036,Uid:1bb0a62f4a501312f477c94c22d0cf69,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721929534388860528,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb0a62f4a5
01312f477c94c22d0cf69,},Annotations:map[string]string{kubernetes.io/config.hash: 1bb0a62f4a501312f477c94c22d0cf69,kubernetes.io/config.seen: 2024-07-25T17:45:33.914249515Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&PodSandboxMetadata{Name:etcd-ha-174036,Uid:243af717eadb4d61aadfedd2ed2a3083,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721929534385401564,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.165:2379,kubernetes.io/config.hash: 243af717eadb4d61aadfedd2ed2a3083,kubernetes.io/config.seen: 2024-07-25T17:45:33.914250484Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:792a8f45313d0d458ca4
530da219308841bfa0805526bd53c725b5058370a264,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-174036,Uid:5b29243a17ab88a279707af48677c8a9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721929534383591054,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5b29243a17ab88a279707af48677c8a9,kubernetes.io/config.seen: 2024-07-25T17:45:33.914248518Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:209f2e15348a23b82a2ac856c72c9fd980284e4c17607853220bab880677bcba,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-174036,Uid:60b0c4b255cf168fc0ff6e1b5b5a5e1f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721929534381734518,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,kubernetes.io/config.seen: 2024-07-25T17:45:33.914244433Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=aafddefb-995d-46fa-8c53-f4e78724b28c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.246970473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31854f64-02ee-4413-b211-f36edb8ea329 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.247048516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31854f64-02ee-4413-b211-f36edb8ea329 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.247359851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721929705860259350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571452334383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b4910d2dffd4c50b6c81ede4e225d7473a98b0a9834b082d0bcdc49420a72e,PodSandboxId:95f27d4d3811628521745f206fc71b1199b834cc198df0bca15269d0c0a5fafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721929571394940607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571379692343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba
2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721929559436837253,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172192955
5002599079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61b54c0418380759094fff6b18b34e532b3761b6449b09813306455a54f8ec0,PodSandboxId:d403d51e1490c2320c93a00c562f26f6f4e3da0bd721d3c74a9c87fd1aa30535,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17219295378
34324539,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb0a62f4a501312f477c94c22d0cf69,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd,PodSandboxId:9e04e99a376a1e1e62532e5b1714466bfe9df09c9f4cd099bef96b800c2cdba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721929534723301504,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721929534688682857,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721929534601299061,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526,PodSandboxId:209f2e15348a23b82a2ac856c72c9fd980284e4c17607853220bab880677bcba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721929534554019533,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31854f64-02ee-4413-b211-f36edb8ea329 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.288040302Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83fdf60f-eb41-40bb-bfaf-bb97af961da3 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.288160511Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83fdf60f-eb41-40bb-bfaf-bb97af961da3 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.289672859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a43192a7-1de7-4624-ae7a-44f79a8231c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.290327496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721929920290294184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a43192a7-1de7-4624-ae7a-44f79a8231c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.291031720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86135211-8e42-4778-89e0-cd086d86018b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.291116105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86135211-8e42-4778-89e0-cd086d86018b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.291506355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721929705860259350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571452334383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b4910d2dffd4c50b6c81ede4e225d7473a98b0a9834b082d0bcdc49420a72e,PodSandboxId:95f27d4d3811628521745f206fc71b1199b834cc198df0bca15269d0c0a5fafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721929571394940607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571379692343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba
2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721929559436837253,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172192955
5002599079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61b54c0418380759094fff6b18b34e532b3761b6449b09813306455a54f8ec0,PodSandboxId:d403d51e1490c2320c93a00c562f26f6f4e3da0bd721d3c74a9c87fd1aa30535,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17219295378
34324539,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb0a62f4a501312f477c94c22d0cf69,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd,PodSandboxId:9e04e99a376a1e1e62532e5b1714466bfe9df09c9f4cd099bef96b800c2cdba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721929534723301504,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721929534688682857,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721929534601299061,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526,PodSandboxId:209f2e15348a23b82a2ac856c72c9fd980284e4c17607853220bab880677bcba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721929534554019533,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86135211-8e42-4778-89e0-cd086d86018b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.331359709Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f68ee70-36b4-431f-b7fa-7f3bf9178c16 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.331438019Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f68ee70-36b4-431f-b7fa-7f3bf9178c16 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.332621387Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9c9b801-cfeb-47da-a1c2-1219b80ca17f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.333137348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721929920333109025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9c9b801-cfeb-47da-a1c2-1219b80ca17f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.333881708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3895990a-2bcc-4abe-83e5-05721c5ba839 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.333957828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3895990a-2bcc-4abe-83e5-05721c5ba839 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:00 ha-174036 crio[682]: time="2024-07-25 17:52:00.334199901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721929705860259350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571452334383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b4910d2dffd4c50b6c81ede4e225d7473a98b0a9834b082d0bcdc49420a72e,PodSandboxId:95f27d4d3811628521745f206fc71b1199b834cc198df0bca15269d0c0a5fafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721929571394940607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571379692343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba
2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721929559436837253,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172192955
5002599079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61b54c0418380759094fff6b18b34e532b3761b6449b09813306455a54f8ec0,PodSandboxId:d403d51e1490c2320c93a00c562f26f6f4e3da0bd721d3c74a9c87fd1aa30535,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17219295378
34324539,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb0a62f4a501312f477c94c22d0cf69,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd,PodSandboxId:9e04e99a376a1e1e62532e5b1714466bfe9df09c9f4cd099bef96b800c2cdba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721929534723301504,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721929534688682857,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721929534601299061,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526,PodSandboxId:209f2e15348a23b82a2ac856c72c9fd980284e4c17607853220bab880677bcba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721929534554019533,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3895990a-2bcc-4abe-83e5-05721c5ba839 name=/runtime.v1.RuntimeService/ListContainers
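
The debug entries above record CRI calls hitting CRI-O's interceptor logging: RuntimeService/Version, ImageService/ImageFsInfo, and unfiltered RuntimeService/ListContainers, polled repeatedly while this log bundle was collected. The same endpoints can be exercised by hand from the node. Below is a minimal sketch that shells out to crictl; it assumes crictl is installed on the node (e.g. after `minikube ssh -p ha-174036`) and already points at unix:///var/run/crio/crio.sock. The unfiltered ListContainers call corresponds to `crictl ps -a`, which is parsed after the container status table below.

    // crio_probe.go: a rough sketch that replays the CRI queries seen in the
    // debug log above by shelling out to crictl. Assumes crictl is present on
    // the node and configured for the CRI-O socket; adjust as needed.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, args := range [][]string{
            {"version"},     // RuntimeService/Version
            {"imagefsinfo"}, // ImageService/ImageFsInfo
        } {
            out, err := exec.Command("sudo", append([]string{"crictl"}, args...)...).CombinedOutput()
            if err != nil {
                fmt.Printf("crictl %v failed: %v\n%s", args, err, out)
                continue
            }
            fmt.Printf("== crictl %v ==\n%s\n", args, out)
        }
    }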
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bbb36d42911b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   c949824afb5f4       busybox-fc5497c4f-2mwrb
	0110c72f3cc1a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   9bb7062a78b83       coredns-7db6d8ff4d-flblg
	35b4910d2dffd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   95f27d4d38116       storage-provisioner
	7faf8fe41b978       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   77a88d259037c       coredns-7db6d8ff4d-vtr9p
	fe8ee70c5b693       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   08e5a1f0a23d2       kindnet-2c2n8
	3afce6c1101d6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   c399536e97e26       kube-proxy-s6jdn
	a61b54c041838       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   d403d51e1490c       kube-vip-ha-174036
	0c7004ab2454d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   9e04e99a376a1       kube-apiserver-ha-174036
	5de803e0d40d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   18925eee7f455       etcd-ha-174036
	fe2d3acd60c40       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   792a8f45313d0       kube-scheduler-ha-174036
	26c724f452769       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   209f2e15348a2       kube-controller-manager-ha-174036
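
The table above is the condensed form of the ListContainers responses in the debug log. A small sketch that rebuilds a similar table from `crictl ps -a -o json`; the JSON field names (id, metadata.name, state) are assumptions mirroring the ListContainersResponse fields shown above and may differ between crictl versions:

    // list_containers.go: print a condensed container table like the one above
    // from crictl's JSON output. Field names are assumptions based on the
    // ListContainersResponse dump in the CRI-O debug log.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
        if err != nil {
            panic(err)
        }
        var resp struct {
            Containers []struct {
                ID       string `json:"id"`
                Metadata struct {
                    Name string `json:"name"`
                } `json:"metadata"`
                State string `json:"state"`
            } `json:"containers"`
        }
        if err := json.Unmarshal(out, &resp); err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%-13.13s  %-28s  %s\n", c.ID, c.Metadata.Name, c.State)
        }
    }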
	
	
	==> coredns [0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f] <==
	[INFO] 10.244.1.2:48378 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001981322s
	[INFO] 10.244.0.4:57743 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000237286s
	[INFO] 10.244.0.4:35821 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009454s
	[INFO] 10.244.0.4:56762 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015961s
	[INFO] 10.244.0.4:33710 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011041s
	[INFO] 10.244.0.4:39222 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091598s
	[INFO] 10.244.2.2:35849 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163406s
	[INFO] 10.244.2.2:58585 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001474924s
	[INFO] 10.244.2.2:43739 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099316s
	[INFO] 10.244.1.2:50301 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00197463s
	[INFO] 10.244.1.2:57934 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001587617s
	[INFO] 10.244.1.2:46902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144867s
	[INFO] 10.244.1.2:45033 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00024148s
	[INFO] 10.244.0.4:39933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007593s
	[INFO] 10.244.0.4:56548 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135774s
	[INFO] 10.244.2.2:37400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145773s
	[INFO] 10.244.2.2:35387 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008288s
	[INFO] 10.244.2.2:51951 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060263s
	[INFO] 10.244.0.4:35903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122708s
	[INFO] 10.244.0.4:47190 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168947s
	[INFO] 10.244.2.2:57705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000173851s
	[INFO] 10.244.1.2:46849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111229s
	[INFO] 10.244.1.2:45248 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080498s
	[INFO] 10.244.1.2:34246 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112642s
	[INFO] 10.244.1.2:60449 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082776s
	
	
	==> coredns [7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f] <==
	[INFO] 10.244.2.2:51239 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001453951s
	[INFO] 10.244.0.4:47955 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140363s
	[INFO] 10.244.0.4:47149 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003632272s
	[INFO] 10.244.0.4:50546 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003351953s
	[INFO] 10.244.2.2:39311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129834s
	[INFO] 10.244.2.2:46828 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001959216s
	[INFO] 10.244.2.2:50785 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205115s
	[INFO] 10.244.2.2:60376 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000134751s
	[INFO] 10.244.2.2:42181 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000185565s
	[INFO] 10.244.1.2:33441 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154369s
	[INFO] 10.244.1.2:48932 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095106s
	[INFO] 10.244.1.2:57921 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014197s
	[INFO] 10.244.1.2:36171 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087145s
	[INFO] 10.244.0.4:34307 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088823s
	[INFO] 10.244.0.4:57061 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114297s
	[INFO] 10.244.2.2:54914 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000215592s
	[INFO] 10.244.1.2:41895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148191s
	[INFO] 10.244.1.2:43543 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125877s
	[INFO] 10.244.1.2:60822 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099959s
	[INFO] 10.244.1.2:55371 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085133s
	[INFO] 10.244.0.4:60792 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135863s
	[INFO] 10.244.0.4:34176 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000198465s
	[INFO] 10.244.2.2:48933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196507s
	[INFO] 10.244.2.2:49323 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179955s
	[INFO] 10.244.2.2:55358 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098973s
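
The CoreDNS lines above come from its query logging: client address and port, query id, the question ("TYPE IN name proto size do bufsize"), the response code, response flags, response size, and duration. A parsing sketch follows; the regular expression is fitted to the sample lines here, not taken from a formal specification:

    // parse_coredns.go: pull the interesting fields out of a CoreDNS query log
    // line like the ones above. The layout is inferred from the sample lines,
    // so treat the expression as an assumption rather than a spec.
    package main

    import (
        "fmt"
        "regexp"
    )

    var line = `[INFO] 10.244.1.2:46902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144867s`

    var re = regexp.MustCompile(`^\[INFO\] (\S+) - \d+ "(\S+) IN (\S+) .*" (\S+) \S+ \d+ (\S+)$`)

    func main() {
        m := re.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("client=%s type=%s name=%s rcode=%s took=%s\n", m[1], m[2], m[3], m[4], m[5])
    }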
	
	
	==> describe nodes <==
	Name:               ha-174036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T17_45_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:45:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:51:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:48:44 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:48:44 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:48:44 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:48:44 +0000   Thu, 25 Jul 2024 17:46:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    ha-174036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1be020ed9784dbcb9721764c32b616e
	  System UUID:                a1be020e-d978-4dbc-b972-1764c32b616e
	  Boot ID:                    96d25b24-9958-4e84-b55d-0be006e0dab8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2mwrb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7db6d8ff4d-flblg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 coredns-7db6d8ff4d-vtr9p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 etcd-ha-174036                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-2c2n8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-174036             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-174036    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-s6jdn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-174036             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-174036                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m5s   kube-proxy       
	  Normal  Starting                 6m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m20s  kubelet          Node ha-174036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s  kubelet          Node ha-174036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s  kubelet          Node ha-174036 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m6s   node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal  NodeReady                5m50s  kubelet          Node ha-174036 status is now: NodeReady
	  Normal  RegisteredNode           5m3s   node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal  RegisteredNode           3m49s  node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	
	
	Name:               ha-174036-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T17_46_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:46:40 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:49:34 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 25 Jul 2024 17:48:42 +0000   Thu, 25 Jul 2024 17:50:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 25 Jul 2024 17:48:42 +0000   Thu, 25 Jul 2024 17:50:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 25 Jul 2024 17:48:42 +0000   Thu, 25 Jul 2024 17:50:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 25 Jul 2024 17:48:42 +0000   Thu, 25 Jul 2024 17:50:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.197
	  Hostname:    ha-174036-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8093ac6d205c434d94cbb70f3b2823ae
	  System UUID:                8093ac6d-205c-434d-94cb-b70f3b2823ae
	  Boot ID:                    2e13db07-8ea1-42a3-acad-03ad7606d62e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wtxzv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-174036-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-k4d8x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m20s
	  kube-system                 kube-apiserver-ha-174036-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-ha-174036-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-proxy-xwvdm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-ha-174036-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-174036-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node ha-174036-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node ha-174036-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m20s)  kubelet          Node ha-174036-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           5m3s                   node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  NodeNotReady             104s                   node-controller  Node ha-174036-m02 status is now: NodeNotReady
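
ha-174036-m02 is the control-plane node that was stopped: its conditions have flipped to Unknown, the node controller has marked it NotReady, and the unreachable NoSchedule/NoExecute taints have been added. A quick way to watch for that state from the test host is to ask the API server for each node's Ready condition and taints; the sketch below shells out to kubectl and assumes the ha-174036 context is present in the active kubeconfig:

    // node_ready.go: print each node's Ready condition status and any taint
    // keys, i.e. the information the "describe nodes" section above summarizes.
    // Assumes kubectl is on PATH and the ha-174036 context exists.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tmpl := `{range .items[*]}{.metadata.name}{"\t"}` +
            `{.status.conditions[?(@.type=="Ready")].status}{"\t"}` +
            `{.spec.taints[*].key}{"\n"}{end}`
        out, err := exec.Command("kubectl", "--context", "ha-174036",
            "get", "nodes", "-o", "jsonpath="+tmpl).CombinedOutput()
        if err != nil {
            fmt.Printf("kubectl failed: %v\n%s", err, out)
            return
        }
        fmt.Print(string(out))
    }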
	
	
	Name:               ha-174036-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T17_47_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:47:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:51:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:48:55 +0000   Thu, 25 Jul 2024 17:47:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:48:55 +0000   Thu, 25 Jul 2024 17:47:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:48:55 +0000   Thu, 25 Jul 2024 17:47:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:48:55 +0000   Thu, 25 Jul 2024 17:48:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-174036-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 45503b4610a245398fdd1551d18f3934
	  System UUID:                45503b46-10a2-4539-8fdd-1551d18f3934
	  Boot ID:                    7ed5b409-9367-4265-9cd0-e00584c888dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qqdtg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-174036-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-fcznc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-ha-174036-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-174036-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-5klkv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ha-174036-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-174036-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m6s (x2 over 4m6s)  kubelet          Node ha-174036-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x2 over 4m6s)  kubelet          Node ha-174036-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x2 over 4m6s)  kubelet          Node ha-174036-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	  Normal  RegisteredNode           3m49s                node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	  Normal  NodeReady                3m45s                kubelet          Node ha-174036-m03 status is now: NodeReady
	
	
	Name:               ha-174036-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T17_49_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:48:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:51:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:49:30 +0000   Thu, 25 Jul 2024 17:48:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:49:30 +0000   Thu, 25 Jul 2024 17:48:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:49:30 +0000   Thu, 25 Jul 2024 17:48:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:49:30 +0000   Thu, 25 Jul 2024 17:49:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-174036-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ccffe731755d4ecfa1441a8d697922a2
	  System UUID:                ccffe731-755d-4ecf-a144-1a8d697922a2
	  Boot ID:                    52b58166-644f-492f-aee4-24a775481797
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bvhcw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-cvcj9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-174036-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-174036-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-174036-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal  NodeReady                2m40s                kubelet          Node ha-174036-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul25 17:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050092] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036800] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.666029] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.842811] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.842958] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.777476] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.055370] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056188] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.174852] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.114710] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.260280] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.890204] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.211746] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.064261] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251761] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.094069] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.327144] kauditd_printk_skb: 21 callbacks suppressed
	[Jul25 17:46] kauditd_printk_skb: 34 callbacks suppressed
	[ +46.764801] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9] <==
	{"level":"warn","ts":"2024-07-25T17:52:00.60922Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.617359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.626598Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.628181Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.634359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.637586Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.640705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.646744Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.648328Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.657074Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.657347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.659413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.664059Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.664262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.666927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.671223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.679282Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.685445Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.69186Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.694965Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.697472Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.702671Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.709631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.715356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:00.727731Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:52:00 up 6 min,  0 users,  load average: 0.20, 0.33, 0.18
	Linux ha-174036 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad] <==
	I0725 17:51:30.458231       1 main.go:299] handling current node
	I0725 17:51:40.460161       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:51:40.460317       1 main.go:299] handling current node
	I0725 17:51:40.460355       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:51:40.460377       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 17:51:40.460617       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:51:40.460674       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:51:40.460866       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:51:40.460915       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:51:50.451255       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:51:50.451327       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 17:51:50.451547       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:51:50.451579       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:51:50.451687       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:51:50.451714       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:51:50.451898       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:51:50.451926       1 main.go:299] handling current node
	I0725 17:52:00.452176       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:52:00.452230       1 main.go:299] handling current node
	I0725 17:52:00.452246       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:52:00.452252       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 17:52:00.452412       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:52:00.452417       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:52:00.452535       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:52:00.452541       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd] <==
	I0725 17:45:40.873250       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0725 17:45:40.884191       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0725 17:45:54.365532       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0725 17:45:54.366410       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0725 17:47:55.506264       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0725 17:47:55.506349       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0725 17:47:55.506391       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 9.079µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0725 17:47:55.507645       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0725 17:47:55.507913       1 timeout.go:142] post-timeout activity - time-elapsed: 1.867844ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0725 17:48:27.240625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44802: use of closed network connection
	E0725 17:48:27.430926       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44818: use of closed network connection
	E0725 17:48:27.618221       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44840: use of closed network connection
	E0725 17:48:27.812599       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35156: use of closed network connection
	E0725 17:48:27.997664       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35176: use of closed network connection
	E0725 17:48:28.186932       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35184: use of closed network connection
	E0725 17:48:28.365554       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35206: use of closed network connection
	E0725 17:48:28.541904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35228: use of closed network connection
	E0725 17:48:28.727648       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35240: use of closed network connection
	E0725 17:48:29.022530       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35264: use of closed network connection
	E0725 17:48:29.210094       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35286: use of closed network connection
	E0725 17:48:29.384014       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35304: use of closed network connection
	E0725 17:48:29.573048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35324: use of closed network connection
	E0725 17:48:29.749375       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35338: use of closed network connection
	E0725 17:48:29.927249       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35366: use of closed network connection
	W0725 17:49:59.414080       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.165 192.168.39.253]
	
	
	==> kube-controller-manager [26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526] <==
	I0725 17:48:22.726654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.949µs"
	I0725 17:48:22.738601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="179.57µs"
	I0725 17:48:22.747376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.439µs"
	I0725 17:48:22.846392       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.452072ms"
	I0725 17:48:23.045407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="198.789744ms"
	E0725 17:48:23.045453       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:48:23.045563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.763µs"
	I0725 17:48:23.054219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="209.155µs"
	I0725 17:48:24.566087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.296µs"
	I0725 17:48:26.074925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.573644ms"
	I0725 17:48:26.075089       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.274µs"
	I0725 17:48:26.500352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.838482ms"
	I0725 17:48:26.500534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.371µs"
	I0725 17:48:26.781513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.454496ms"
	I0725 17:48:26.781832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.791µs"
	E0725 17:48:59.136629       1 certificate_controller.go:146] Sync csr-t6bk2 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-t6bk2": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:48:59.424658       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-174036-m04\" does not exist"
	I0725 17:48:59.449172       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-174036-m04" podCIDRs=["10.244.3.0/24"]
	E0725 17:48:59.623621       1 daemon_controller.go:324] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"43516f2f-60db-4965-95c3-016e6e19e643", ResourceVersion:"914", Generation:1, CreationTimestamp:time.Date(2024, time.July, 25, 17, 45, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\
":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240719-e7903573\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostP
ath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001fd2260), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", Vo
lumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00257eae0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1
.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00257eaf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), Down
wardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00257eb10), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.IS
CSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Containe
r{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240719-e7903573", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001fd2280)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001fd22c0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:res
ource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(
*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002904660), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002841ff8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001e9b200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, H
ostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00284e7b0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00287c040)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on
daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:49:04.284449       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-174036-m04"
	I0725 17:49:20.223473       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174036-m04"
	I0725 17:50:16.736252       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174036-m04"
	I0725 17:50:16.781874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.394571ms"
	I0725 17:50:16.781990       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.729µs"
	
	
	==> kube-proxy [3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136] <==
	I0725 17:45:55.265656       1 server_linux.go:69] "Using iptables proxy"
	I0725 17:45:55.282546       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	I0725 17:45:55.319637       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:45:55.319680       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:45:55.319698       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:45:55.322168       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:45:55.322638       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:45:55.322679       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:45:55.324284       1 config.go:192] "Starting service config controller"
	I0725 17:45:55.324455       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:45:55.324496       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:45:55.324512       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:45:55.325124       1 config.go:319] "Starting node config controller"
	I0725 17:45:55.325176       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:45:55.425218       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 17:45:55.425237       1 shared_informer.go:320] Caches are synced for node config
	I0725 17:45:55.425360       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002] <==
	W0725 17:45:38.783465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 17:45:38.783504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 17:45:38.788579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 17:45:38.788624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 17:45:38.881489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 17:45:38.881533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 17:45:38.900961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 17:45:38.901040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 17:45:38.929232       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 17:45:38.929278       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 17:45:38.941407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 17:45:38.941461       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0725 17:45:40.791158       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0725 17:47:54.826283       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5klkv\": pod kube-proxy-5klkv is already assigned to node \"ha-174036-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5klkv" node="ha-174036-m03"
	E0725 17:47:54.828928       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cc83bed2-4af8-4de2-ac28-f9b62e75297b(kube-system/kube-proxy-5klkv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5klkv"
	E0725 17:47:54.829145       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5klkv\": pod kube-proxy-5klkv is already assigned to node \"ha-174036-m03\"" pod="kube-system/kube-proxy-5klkv"
	I0725 17:47:54.829246       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5klkv" node="ha-174036-m03"
	E0725 17:48:22.692298       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wtxzv\": pod busybox-fc5497c4f-wtxzv is already assigned to node \"ha-174036-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wtxzv" node="ha-174036-m02"
	E0725 17:48:22.692366       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 93b566c1-d54b-4740-a5ce-777a73656d9a(default/busybox-fc5497c4f-wtxzv) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wtxzv"
	E0725 17:48:22.692380       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wtxzv\": pod busybox-fc5497c4f-wtxzv is already assigned to node \"ha-174036-m02\"" pod="default/busybox-fc5497c4f-wtxzv"
	I0725 17:48:22.692408       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wtxzv" node="ha-174036-m02"
	E0725 17:48:59.526924       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bvhcw\": pod kindnet-bvhcw is already assigned to node \"ha-174036-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bvhcw" node="ha-174036-m04"
	E0725 17:48:59.527112       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3353f0f7-eee0-42c7-aaef-d495f721b520(kube-system/kindnet-bvhcw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bvhcw"
	E0725 17:48:59.527151       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bvhcw\": pod kindnet-bvhcw is already assigned to node \"ha-174036-m04\"" pod="kube-system/kindnet-bvhcw"
	I0725 17:48:59.527190       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bvhcw" node="ha-174036-m04"
	
	
	==> kubelet <==
	Jul 25 17:47:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:47:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:48:22 ha-174036 kubelet[1362]: I0725 17:48:22.685737    1362 topology_manager.go:215] "Topology Admit Handler" podUID="e874d68f-5f06-44af-882d-fb479da5a101" podNamespace="default" podName="busybox-fc5497c4f-2mwrb"
	Jul 25 17:48:22 ha-174036 kubelet[1362]: I0725 17:48:22.841253    1362 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmbxk\" (UniqueName: \"kubernetes.io/projected/e874d68f-5f06-44af-882d-fb479da5a101-kube-api-access-lmbxk\") pod \"busybox-fc5497c4f-2mwrb\" (UID: \"e874d68f-5f06-44af-882d-fb479da5a101\") " pod="default/busybox-fc5497c4f-2mwrb"
	Jul 25 17:48:26 ha-174036 kubelet[1362]: I0725 17:48:26.471235    1362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-2mwrb" podStartSLOduration=1.891520415 podStartE2EDuration="4.471183839s" podCreationTimestamp="2024-07-25 17:48:22 +0000 UTC" firstStartedPulling="2024-07-25 17:48:23.269029074 +0000 UTC m=+162.621301155" lastFinishedPulling="2024-07-25 17:48:25.848692487 +0000 UTC m=+165.200964579" observedRunningTime="2024-07-25 17:48:26.470233675 +0000 UTC m=+165.822505775" watchObservedRunningTime="2024-07-25 17:48:26.471183839 +0000 UTC m=+165.823455940"
	Jul 25 17:48:40 ha-174036 kubelet[1362]: E0725 17:48:40.851156    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:48:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:48:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:48:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:48:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:49:40 ha-174036 kubelet[1362]: E0725 17:49:40.850905    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:49:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:49:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:49:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:49:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:50:40 ha-174036 kubelet[1362]: E0725 17:50:40.851812    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:50:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:50:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:50:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:50:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:51:40 ha-174036 kubelet[1362]: E0725 17:51:40.850553    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:51:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:51:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:51:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:51:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174036 -n ha-174036
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (49.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr: exit status 3 (3.206161917s)

                                                
                                                
-- stdout --
	ha-174036
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174036-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:52:05.276551   28580 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:52:05.276792   28580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:05.276801   28580 out.go:304] Setting ErrFile to fd 2...
	I0725 17:52:05.276805   28580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:05.277040   28580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:52:05.277201   28580 out.go:298] Setting JSON to false
	I0725 17:52:05.277227   28580 mustload.go:65] Loading cluster: ha-174036
	I0725 17:52:05.277310   28580 notify.go:220] Checking for updates...
	I0725 17:52:05.277622   28580 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:52:05.277637   28580 status.go:255] checking status of ha-174036 ...
	I0725 17:52:05.278075   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:05.278149   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:05.296217   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0725 17:52:05.296624   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:05.297225   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:05.297246   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:05.297659   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:05.297931   28580 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:52:05.299475   28580 status.go:330] ha-174036 host status = "Running" (err=<nil>)
	I0725 17:52:05.299491   28580 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:05.299751   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:05.299779   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:05.314060   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
	I0725 17:52:05.314417   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:05.314843   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:05.314874   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:05.315194   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:05.315370   28580 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:52:05.318164   28580 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:05.318743   28580 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:05.318768   28580 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:05.318995   28580 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:05.319413   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:05.319458   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:05.333844   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0725 17:52:05.334148   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:05.334629   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:05.334643   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:05.334918   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:05.335091   28580 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:52:05.335281   28580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:05.335310   28580 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:52:05.337848   28580 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:05.338242   28580 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:05.338265   28580 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:05.338423   28580 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:52:05.338624   28580 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:52:05.338886   28580 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:52:05.339047   28580 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:52:05.419411   28580 ssh_runner.go:195] Run: systemctl --version
	I0725 17:52:05.424962   28580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:05.441502   28580 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:05.441531   28580 api_server.go:166] Checking apiserver status ...
	I0725 17:52:05.441584   28580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:05.455525   28580 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0725 17:52:05.464525   28580 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:05.464604   28580 ssh_runner.go:195] Run: ls
	I0725 17:52:05.468516   28580 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:05.473887   28580 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:05.473909   28580 status.go:422] ha-174036 apiserver status = Running (err=<nil>)
	I0725 17:52:05.473921   28580 status.go:257] ha-174036 status: &{Name:ha-174036 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:05.473944   28580 status.go:255] checking status of ha-174036-m02 ...
	I0725 17:52:05.474233   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:05.474275   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:05.489529   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I0725 17:52:05.489958   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:05.490475   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:05.490495   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:05.490791   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:05.490978   28580 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 17:52:05.492474   28580 status.go:330] ha-174036-m02 host status = "Running" (err=<nil>)
	I0725 17:52:05.492491   28580 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:05.492858   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:05.492898   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:05.507380   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0725 17:52:05.507825   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:05.508246   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:05.508266   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:05.508553   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:05.508728   28580 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:52:05.511215   28580 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:05.511630   28580 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:05.511655   28580 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:05.511815   28580 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:05.512189   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:05.512228   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:05.526102   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
	I0725 17:52:05.526507   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:05.526947   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:05.526968   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:05.527230   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:05.527416   28580 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:52:05.527617   28580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:05.527640   28580 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:52:05.530379   28580 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:05.530842   28580 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:05.530876   28580 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:05.531009   28580 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:52:05.531153   28580 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:52:05.531279   28580 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:52:05.531390   28580 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	W0725 17:52:08.096672   28580 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.197:22: connect: no route to host
	W0725 17:52:08.096766   28580 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	E0725 17:52:08.096809   28580 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:08.096827   28580 status.go:257] ha-174036-m02 status: &{Name:ha-174036-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0725 17:52:08.096851   28580 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:08.096865   28580 status.go:255] checking status of ha-174036-m03 ...
	I0725 17:52:08.097184   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:08.097243   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:08.112163   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0725 17:52:08.112670   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:08.113120   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:08.113143   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:08.113536   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:08.113823   28580 main.go:141] libmachine: (ha-174036-m03) Calling .GetState
	I0725 17:52:08.115683   28580 status.go:330] ha-174036-m03 host status = "Running" (err=<nil>)
	I0725 17:52:08.115702   28580 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:08.115977   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:08.116009   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:08.130417   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44547
	I0725 17:52:08.130872   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:08.131314   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:08.131333   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:08.131697   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:08.131922   28580 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:52:08.134737   28580 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:08.135380   28580 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:08.135407   28580 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:08.135601   28580 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:08.135913   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:08.135947   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:08.151719   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0725 17:52:08.152131   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:08.152751   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:08.152772   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:08.153059   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:08.153295   28580 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:52:08.153481   28580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:08.153505   28580 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:52:08.156201   28580 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:08.156651   28580 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:08.156671   28580 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:08.156823   28580 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:52:08.157015   28580 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:52:08.157148   28580 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:52:08.157302   28580 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:52:08.239458   28580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:08.254692   28580 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:08.254717   28580 api_server.go:166] Checking apiserver status ...
	I0725 17:52:08.254749   28580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:08.270466   28580 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0725 17:52:08.280519   28580 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:08.280585   28580 ssh_runner.go:195] Run: ls
	I0725 17:52:08.284951   28580 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:08.289413   28580 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:08.289435   28580 status.go:422] ha-174036-m03 apiserver status = Running (err=<nil>)
	I0725 17:52:08.289452   28580 status.go:257] ha-174036-m03 status: &{Name:ha-174036-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:08.289469   28580 status.go:255] checking status of ha-174036-m04 ...
	I0725 17:52:08.289845   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:08.289889   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:08.306444   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41271
	I0725 17:52:08.306840   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:08.307384   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:08.307405   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:08.307688   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:08.307868   28580 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 17:52:08.309357   28580 status.go:330] ha-174036-m04 host status = "Running" (err=<nil>)
	I0725 17:52:08.309372   28580 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:08.309757   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:08.309799   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:08.324795   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0725 17:52:08.325211   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:08.325759   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:08.325779   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:08.326141   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:08.326342   28580 main.go:141] libmachine: (ha-174036-m04) Calling .GetIP
	I0725 17:52:08.329307   28580 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:08.329784   28580 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:08.329805   28580 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:08.329964   28580 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:08.330338   28580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:08.330377   28580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:08.344560   28580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0725 17:52:08.345005   28580 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:08.345489   28580 main.go:141] libmachine: Using API Version  1
	I0725 17:52:08.345533   28580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:08.345843   28580 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:08.346037   28580 main.go:141] libmachine: (ha-174036-m04) Calling .DriverName
	I0725 17:52:08.346249   28580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:08.346267   28580 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHHostname
	I0725 17:52:08.348851   28580 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:08.349207   28580 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:08.349228   28580 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:08.349348   28580 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHPort
	I0725 17:52:08.349534   28580 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHKeyPath
	I0725 17:52:08.349673   28580 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHUsername
	I0725 17:52:08.349769   28580 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m04/id_rsa Username:docker}
	I0725 17:52:08.427550   28580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:08.440534   28580 status.go:257] ha-174036-m04 status: &{Name:ha-174036-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
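The status run above probes each node in turn: an SSH command (`df -h /var`) for storage, `systemctl is-active` for the kubelet, and an HTTPS GET against the load-balanced apiserver `/healthz` endpoint; when the SSH dial to ha-174036-m02 fails with `no route to host`, that node is reported as `Host:Error` / `Kubelet:Nonexistent` while the other control planes stay `Running`. As a rough illustration only (not minikube's actual implementation), a minimal Go sketch of that probe sequence could look like the following; the `probeNode` helper and the hard-coded addresses are hypothetical, taken from the log purely for context:

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

// probeNode is a hypothetical helper mimicking the checks seen in the log:
// first dial the node's SSH port, then query the shared apiserver /healthz
// endpoint. It is an illustration, not minikube code.
func probeNode(sshAddr, healthzURL string) string {
	// Step 1: can we reach the node's SSH port? "no route to host" surfaces
	// here as a dial error and maps to Host:Error in the status report.
	conn, err := net.DialTimeout("tcp", sshAddr, 5*time.Second)
	if err != nil {
		return fmt.Sprintf("host: Error (ssh dial: %v)", err)
	}
	conn.Close()

	// Step 2: hit the apiserver healthz endpoint (self-signed cert, so this
	// sketch skips certificate verification).
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(healthzURL)
	if err != nil {
		return fmt.Sprintf("host: Running, apiserver: unreachable (%v)", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return "host: Running, apiserver: Running"
	}
	return fmt.Sprintf("host: Running, apiserver: unhealthy (%d)", resp.StatusCode)
}

func main() {
	// Addresses taken from the log above; purely illustrative.
	fmt.Println(probeNode("192.168.39.197:22", "https://192.168.39.254:8443/healthz"))
}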
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr: exit status 3 (5.158928161s)

                                                
                                                
-- stdout --
	ha-174036
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174036-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:52:09.470231   28680 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:52:09.470358   28680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:09.470369   28680 out.go:304] Setting ErrFile to fd 2...
	I0725 17:52:09.470376   28680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:09.470548   28680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:52:09.470717   28680 out.go:298] Setting JSON to false
	I0725 17:52:09.470747   28680 mustload.go:65] Loading cluster: ha-174036
	I0725 17:52:09.470857   28680 notify.go:220] Checking for updates...
	I0725 17:52:09.471189   28680 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:52:09.471209   28680 status.go:255] checking status of ha-174036 ...
	I0725 17:52:09.471603   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:09.471667   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:09.487314   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0725 17:52:09.487737   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:09.488282   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:09.488317   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:09.488701   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:09.488922   28680 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:52:09.490558   28680 status.go:330] ha-174036 host status = "Running" (err=<nil>)
	I0725 17:52:09.490573   28680 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:09.490857   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:09.490886   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:09.506845   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44935
	I0725 17:52:09.507199   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:09.507610   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:09.507631   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:09.507971   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:09.508165   28680 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:52:09.510666   28680 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:09.511011   28680 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:09.511055   28680 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:09.511147   28680 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:09.511444   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:09.511475   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:09.525917   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I0725 17:52:09.526257   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:09.526830   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:09.526849   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:09.527176   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:09.527370   28680 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:52:09.527598   28680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:09.527633   28680 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:52:09.530453   28680 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:09.530910   28680 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:09.530943   28680 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:09.531093   28680 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:52:09.531241   28680 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:52:09.531424   28680 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:52:09.531631   28680 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:52:09.611525   28680 ssh_runner.go:195] Run: systemctl --version
	I0725 17:52:09.617154   28680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:09.635046   28680 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:09.635071   28680 api_server.go:166] Checking apiserver status ...
	I0725 17:52:09.635104   28680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:09.650091   28680 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0725 17:52:09.659890   28680 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:09.659946   28680 ssh_runner.go:195] Run: ls
	I0725 17:52:09.664060   28680 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:09.668273   28680 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:09.668305   28680 status.go:422] ha-174036 apiserver status = Running (err=<nil>)
	I0725 17:52:09.668315   28680 status.go:257] ha-174036 status: &{Name:ha-174036 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:09.668354   28680 status.go:255] checking status of ha-174036-m02 ...
	I0725 17:52:09.668622   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:09.668652   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:09.683040   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44319
	I0725 17:52:09.683553   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:09.683990   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:09.684006   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:09.684347   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:09.684548   28680 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 17:52:09.686022   28680 status.go:330] ha-174036-m02 host status = "Running" (err=<nil>)
	I0725 17:52:09.686034   28680 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:09.686358   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:09.686393   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:09.700603   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33053
	I0725 17:52:09.701032   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:09.701452   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:09.701471   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:09.701768   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:09.701937   28680 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:52:09.704379   28680 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:09.704842   28680 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:09.704863   28680 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:09.704985   28680 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:09.705321   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:09.705369   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:09.719549   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I0725 17:52:09.719914   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:09.720376   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:09.720398   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:09.720663   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:09.720857   28680 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:52:09.721080   28680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:09.721101   28680 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:52:09.723766   28680 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:09.724185   28680 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:09.724210   28680 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:09.724360   28680 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:52:09.724503   28680 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:52:09.724609   28680 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:52:09.724746   28680 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	W0725 17:52:11.168637   28680 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:11.168687   28680 retry.go:31] will retry after 374.839053ms: dial tcp 192.168.39.197:22: connect: no route to host
	W0725 17:52:14.240740   28680 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.197:22: connect: no route to host
	W0725 17:52:14.240864   28680 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	E0725 17:52:14.240884   28680 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:14.240891   28680 status.go:257] ha-174036-m02 status: &{Name:ha-174036-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0725 17:52:14.240919   28680 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:14.240928   28680 status.go:255] checking status of ha-174036-m03 ...
	I0725 17:52:14.241298   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:14.241399   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:14.256515   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0725 17:52:14.256999   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:14.257612   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:14.257638   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:14.257979   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:14.258176   28680 main.go:141] libmachine: (ha-174036-m03) Calling .GetState
	I0725 17:52:14.259832   28680 status.go:330] ha-174036-m03 host status = "Running" (err=<nil>)
	I0725 17:52:14.259849   28680 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:14.260208   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:14.260260   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:14.275966   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38999
	I0725 17:52:14.276465   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:14.276963   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:14.276988   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:14.277296   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:14.277468   28680 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:52:14.280678   28680 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:14.281079   28680 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:14.281103   28680 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:14.281368   28680 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:14.281694   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:14.281745   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:14.296897   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0725 17:52:14.297353   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:14.297870   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:14.297898   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:14.298198   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:14.298391   28680 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:52:14.298573   28680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:14.298603   28680 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:52:14.301805   28680 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:14.302272   28680 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:14.302306   28680 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:14.302460   28680 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:52:14.302646   28680 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:52:14.302800   28680 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:52:14.302939   28680 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:52:14.379389   28680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:14.394551   28680 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:14.394578   28680 api_server.go:166] Checking apiserver status ...
	I0725 17:52:14.394611   28680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:14.407867   28680 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0725 17:52:14.418825   28680 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:14.418882   28680 ssh_runner.go:195] Run: ls
	I0725 17:52:14.422851   28680 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:14.428890   28680 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:14.428914   28680 status.go:422] ha-174036-m03 apiserver status = Running (err=<nil>)
	I0725 17:52:14.428922   28680 status.go:257] ha-174036-m03 status: &{Name:ha-174036-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:14.428938   28680 status.go:255] checking status of ha-174036-m04 ...
	I0725 17:52:14.429320   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:14.429358   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:14.445665   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0725 17:52:14.446200   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:14.446764   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:14.446784   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:14.447109   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:14.447322   28680 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 17:52:14.449186   28680 status.go:330] ha-174036-m04 host status = "Running" (err=<nil>)
	I0725 17:52:14.449202   28680 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:14.449488   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:14.449526   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:14.464500   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38997
	I0725 17:52:14.464875   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:14.465315   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:14.465337   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:14.465684   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:14.465878   28680 main.go:141] libmachine: (ha-174036-m04) Calling .GetIP
	I0725 17:52:14.468480   28680 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:14.468908   28680 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:14.468943   28680 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:14.469034   28680 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:14.469344   28680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:14.469383   28680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:14.484350   28680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42205
	I0725 17:52:14.484794   28680 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:14.485232   28680 main.go:141] libmachine: Using API Version  1
	I0725 17:52:14.485251   28680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:14.485567   28680 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:14.485728   28680 main.go:141] libmachine: (ha-174036-m04) Calling .DriverName
	I0725 17:52:14.485905   28680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:14.485932   28680 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHHostname
	I0725 17:52:14.489097   28680 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:14.489646   28680 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:14.489673   28680 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:14.489835   28680 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHPort
	I0725 17:52:14.490015   28680 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHKeyPath
	I0725 17:52:14.490179   28680 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHUsername
	I0725 17:52:14.490348   28680 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m04/id_rsa Username:docker}
	I0725 17:52:14.567066   28680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:14.584029   28680 status.go:257] ha-174036-m04 status: &{Name:ha-174036-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
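The second run shows the same failure mode, with the SSH layer retrying once after a short pause ("will retry after 374.839053ms") before the dial to 192.168.39.197:22 definitively fails and the command exits with status 3 after roughly five seconds. As a hedged sketch of that retry pattern (again not the project's own code; the `dialWithRetry` helper and its parameters are assumptions for illustration):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry is a hypothetical sketch of the behaviour visible in the log
// (sshutil.go / retry.go): attempt the TCP dial a few times with a short
// pause between attempts, then give up and surface the last error.
func dialWithRetry(addr string, attempts int, pause time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		time.Sleep(pause) // e.g. the ~375ms pause logged before the retry
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// Address of the unreachable secondary control plane from the log.
	if err := dialWithRetry("192.168.39.197:22", 2, 375*time.Millisecond); err != nil {
		fmt.Println("status check failed:", err)
	}
}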
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr: exit status 3 (4.956814302s)

                                                
                                                
-- stdout --
	ha-174036
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174036-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:52:16.081719   28795 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:52:16.081938   28795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:16.081946   28795 out.go:304] Setting ErrFile to fd 2...
	I0725 17:52:16.081951   28795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:16.082144   28795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:52:16.082348   28795 out.go:298] Setting JSON to false
	I0725 17:52:16.082373   28795 mustload.go:65] Loading cluster: ha-174036
	I0725 17:52:16.082427   28795 notify.go:220] Checking for updates...
	I0725 17:52:16.082835   28795 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:52:16.082854   28795 status.go:255] checking status of ha-174036 ...
	I0725 17:52:16.083267   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:16.083320   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:16.097987   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0725 17:52:16.098494   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:16.099054   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:16.099080   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:16.099607   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:16.099819   28795 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:52:16.101474   28795 status.go:330] ha-174036 host status = "Running" (err=<nil>)
	I0725 17:52:16.101490   28795 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:16.101902   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:16.101960   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:16.117533   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0725 17:52:16.117883   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:16.118345   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:16.118367   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:16.118791   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:16.118974   28795 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:52:16.122127   28795 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:16.122658   28795 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:16.122686   28795 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:16.122837   28795 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:16.123224   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:16.123266   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:16.138024   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0725 17:52:16.138415   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:16.138866   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:16.138887   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:16.139175   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:16.139367   28795 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:52:16.139555   28795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:16.139577   28795 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:52:16.142336   28795 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:16.142758   28795 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:16.142793   28795 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:16.142926   28795 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:52:16.143133   28795 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:52:16.143370   28795 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:52:16.143516   28795 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:52:16.223932   28795 ssh_runner.go:195] Run: systemctl --version
	I0725 17:52:16.235128   28795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:16.251506   28795 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:16.251531   28795 api_server.go:166] Checking apiserver status ...
	I0725 17:52:16.251575   28795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:16.265846   28795 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0725 17:52:16.275135   28795 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:16.275196   28795 ssh_runner.go:195] Run: ls
	I0725 17:52:16.279329   28795 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:16.283248   28795 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:16.283275   28795 status.go:422] ha-174036 apiserver status = Running (err=<nil>)
	I0725 17:52:16.283284   28795 status.go:257] ha-174036 status: &{Name:ha-174036 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:16.283301   28795 status.go:255] checking status of ha-174036-m02 ...
	I0725 17:52:16.283681   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:16.283716   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:16.298718   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40373
	I0725 17:52:16.299054   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:16.299504   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:16.299522   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:16.299864   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:16.300052   28795 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 17:52:16.301679   28795 status.go:330] ha-174036-m02 host status = "Running" (err=<nil>)
	I0725 17:52:16.301696   28795 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:16.301995   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:16.302036   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:16.316516   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36325
	I0725 17:52:16.316944   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:16.317398   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:16.317418   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:16.317726   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:16.317930   28795 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:52:16.320604   28795 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:16.321035   28795 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:16.321061   28795 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:16.321166   28795 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:16.321462   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:16.321511   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:16.336810   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34145
	I0725 17:52:16.337154   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:16.337610   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:16.337629   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:16.337987   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:16.338186   28795 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:52:16.338403   28795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:16.338424   28795 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:52:16.341144   28795 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:16.341589   28795 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:16.341615   28795 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:16.341725   28795 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:52:16.341873   28795 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:52:16.342006   28795 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:52:16.342124   28795 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	W0725 17:52:17.312553   28795 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:17.312608   28795 retry.go:31] will retry after 250.352864ms: dial tcp 192.168.39.197:22: connect: no route to host
	W0725 17:52:20.644595   28795 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.197:22: connect: no route to host
	W0725 17:52:20.644670   28795 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	E0725 17:52:20.644685   28795 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:20.644693   28795 status.go:257] ha-174036-m02 status: &{Name:ha-174036-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0725 17:52:20.644710   28795 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:20.644718   28795 status.go:255] checking status of ha-174036-m03 ...
	I0725 17:52:20.645023   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:20.645066   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:20.660171   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44637
	I0725 17:52:20.660613   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:20.661112   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:20.661140   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:20.661467   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:20.661657   28795 main.go:141] libmachine: (ha-174036-m03) Calling .GetState
	I0725 17:52:20.663214   28795 status.go:330] ha-174036-m03 host status = "Running" (err=<nil>)
	I0725 17:52:20.663234   28795 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:20.663653   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:20.663700   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:20.679063   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I0725 17:52:20.679596   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:20.680222   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:20.680249   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:20.680600   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:20.680806   28795 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:52:20.684200   28795 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:20.684766   28795 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:20.684794   28795 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:20.684985   28795 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:20.685410   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:20.685457   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:20.702081   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0725 17:52:20.702472   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:20.702982   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:20.703003   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:20.703347   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:20.703564   28795 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:52:20.703758   28795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:20.703784   28795 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:52:20.707165   28795 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:20.707756   28795 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:20.707785   28795 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:20.708006   28795 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:52:20.708229   28795 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:52:20.708397   28795 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:52:20.708629   28795 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:52:20.787463   28795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:20.802763   28795 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:20.802788   28795 api_server.go:166] Checking apiserver status ...
	I0725 17:52:20.802820   28795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:20.817668   28795 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0725 17:52:20.828032   28795 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:20.828078   28795 ssh_runner.go:195] Run: ls
	I0725 17:52:20.832337   28795 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:20.842787   28795 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:20.842814   28795 status.go:422] ha-174036-m03 apiserver status = Running (err=<nil>)
	I0725 17:52:20.842824   28795 status.go:257] ha-174036-m03 status: &{Name:ha-174036-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:20.842842   28795 status.go:255] checking status of ha-174036-m04 ...
	I0725 17:52:20.843188   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:20.843238   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:20.857904   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0725 17:52:20.858355   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:20.858846   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:20.858871   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:20.859168   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:20.859374   28795 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 17:52:20.861242   28795 status.go:330] ha-174036-m04 host status = "Running" (err=<nil>)
	I0725 17:52:20.861258   28795 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:20.861651   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:20.861692   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:20.875994   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37829
	I0725 17:52:20.876398   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:20.876807   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:20.876826   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:20.877149   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:20.877340   28795 main.go:141] libmachine: (ha-174036-m04) Calling .GetIP
	I0725 17:52:20.880222   28795 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:20.880694   28795 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:20.880720   28795 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:20.880846   28795 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:20.881116   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:20.881149   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:20.895078   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I0725 17:52:20.895439   28795 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:20.895835   28795 main.go:141] libmachine: Using API Version  1
	I0725 17:52:20.895856   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:20.896142   28795 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:20.896407   28795 main.go:141] libmachine: (ha-174036-m04) Calling .DriverName
	I0725 17:52:20.896633   28795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:20.896656   28795 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHHostname
	I0725 17:52:20.899693   28795 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:20.900140   28795 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:20.900166   28795 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:20.900281   28795 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHPort
	I0725 17:52:20.900447   28795 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHKeyPath
	I0725 17:52:20.900597   28795 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHUsername
	I0725 17:52:20.900717   28795 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m04/id_rsa Username:docker}
	I0725 17:52:20.979687   28795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:20.993291   28795 status.go:257] ha-174036-m04 status: &{Name:ha-174036-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
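The status probe above checks each node in turn: it opens an SSH session to the node, reads /var usage with df, checks whether the kubelet unit is active, and queries the API server's /healthz endpoint through the load-balancer address 192.168.39.254:8443. For ha-174036-m02 the SSH dial fails with "no route to host", so that node is reported with host: Error and kubelet/apiserver: Nonexistent while the other nodes report Running. Below is a minimal sketch of the equivalent manual probes, assuming SSH access from the Jenkins host; the node IP, SSH key path, and user are taken from the log above, and the systemctl form is a simplification of the exact command the test runs.

# Values taken from the log above (ha-174036-m02).
NODE_IP=192.168.39.197
SSH_KEY=/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa

# 1. Disk usage of /var -- the probe that fails with "no route to host" for ha-174036-m02.
ssh -i "$SSH_KEY" -o ConnectTimeout=5 docker@"$NODE_IP" "df -h /var | awk 'NR==2{print \$5}'"

# 2. Kubelet unit state (the test runs "systemctl is-active --quiet service kubelet"; this is the short form).
ssh -i "$SSH_KEY" -o ConnectTimeout=5 docker@"$NODE_IP" "sudo systemctl is-active kubelet"

# 3. API server health through the load-balancer endpoint.
curl -k https://192.168.39.254:8443/healthz

If the first command also reports "no route to host", the failure is at the VM/network level rather than inside the guest.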
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr: exit status 3 (4.967546944s)

                                                
                                                
-- stdout --
	ha-174036
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174036-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:52:22.548609   28896 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:52:22.548709   28896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:22.548719   28896 out.go:304] Setting ErrFile to fd 2...
	I0725 17:52:22.548726   28896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:22.548939   28896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:52:22.549127   28896 out.go:298] Setting JSON to false
	I0725 17:52:22.549154   28896 mustload.go:65] Loading cluster: ha-174036
	I0725 17:52:22.549252   28896 notify.go:220] Checking for updates...
	I0725 17:52:22.549602   28896 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:52:22.549618   28896 status.go:255] checking status of ha-174036 ...
	I0725 17:52:22.550006   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:22.550078   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:22.568822   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0725 17:52:22.569247   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:22.569802   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:22.569827   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:22.570193   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:22.570401   28896 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:52:22.571939   28896 status.go:330] ha-174036 host status = "Running" (err=<nil>)
	I0725 17:52:22.571956   28896 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:22.572315   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:22.572371   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:22.586595   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I0725 17:52:22.587035   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:22.587530   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:22.587554   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:22.587877   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:22.588036   28896 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:52:22.590884   28896 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:22.591346   28896 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:22.591379   28896 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:22.591466   28896 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:22.591747   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:22.591796   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:22.606666   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I0725 17:52:22.607010   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:22.607434   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:22.607449   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:22.607794   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:22.607950   28896 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:52:22.608132   28896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:22.608153   28896 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:52:22.610785   28896 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:22.611252   28896 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:22.611278   28896 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:22.611373   28896 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:52:22.611546   28896 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:52:22.611792   28896 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:52:22.611904   28896 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:52:22.691526   28896 ssh_runner.go:195] Run: systemctl --version
	I0725 17:52:22.697851   28896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:22.712816   28896 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:22.712843   28896 api_server.go:166] Checking apiserver status ...
	I0725 17:52:22.712895   28896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:22.727752   28896 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0725 17:52:22.738153   28896 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:22.738212   28896 ssh_runner.go:195] Run: ls
	I0725 17:52:22.742440   28896 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:22.746570   28896 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:22.746590   28896 status.go:422] ha-174036 apiserver status = Running (err=<nil>)
	I0725 17:52:22.746599   28896 status.go:257] ha-174036 status: &{Name:ha-174036 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:22.746614   28896 status.go:255] checking status of ha-174036-m02 ...
	I0725 17:52:22.746926   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:22.746966   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:22.762305   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43449
	I0725 17:52:22.762694   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:22.763151   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:22.763166   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:22.763425   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:22.763619   28896 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 17:52:22.765078   28896 status.go:330] ha-174036-m02 host status = "Running" (err=<nil>)
	I0725 17:52:22.765094   28896 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:22.765483   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:22.765522   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:22.779860   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0725 17:52:22.780371   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:22.780875   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:22.780893   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:22.781164   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:22.781350   28896 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:52:22.784232   28896 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:22.784720   28896 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:22.784742   28896 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:22.784905   28896 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:22.785195   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:22.785232   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:22.800027   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33505
	I0725 17:52:22.800456   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:22.800896   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:22.800920   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:22.801224   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:22.801393   28896 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:52:22.801581   28896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:22.801606   28896 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:52:22.804004   28896 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:22.804489   28896 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:22.804506   28896 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:22.804686   28896 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:52:22.804915   28896 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:52:22.805060   28896 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:52:22.805177   28896 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	W0725 17:52:23.712503   28896 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:23.712561   28896 retry.go:31] will retry after 357.985173ms: dial tcp 192.168.39.197:22: connect: no route to host
	W0725 17:52:27.136584   28896 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.197:22: connect: no route to host
	W0725 17:52:27.136681   28896 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	E0725 17:52:27.136703   28896 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:27.136716   28896 status.go:257] ha-174036-m02 status: &{Name:ha-174036-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0725 17:52:27.136754   28896 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:27.136777   28896 status.go:255] checking status of ha-174036-m03 ...
	I0725 17:52:27.137079   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:27.137120   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:27.151623   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0725 17:52:27.152172   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:27.152705   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:27.152730   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:27.153129   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:27.153395   28896 main.go:141] libmachine: (ha-174036-m03) Calling .GetState
	I0725 17:52:27.155220   28896 status.go:330] ha-174036-m03 host status = "Running" (err=<nil>)
	I0725 17:52:27.155234   28896 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:27.155645   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:27.155686   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:27.170798   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I0725 17:52:27.171246   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:27.171852   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:27.171876   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:27.172277   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:27.172526   28896 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:52:27.175731   28896 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:27.176230   28896 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:27.176253   28896 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:27.176388   28896 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:27.176781   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:27.176818   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:27.192383   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I0725 17:52:27.192786   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:27.193315   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:27.193342   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:27.193677   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:27.193878   28896 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:52:27.194134   28896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:27.194173   28896 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:52:27.197076   28896 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:27.197526   28896 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:27.197562   28896 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:27.197721   28896 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:52:27.197912   28896 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:52:27.198052   28896 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:52:27.198220   28896 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:52:27.275609   28896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:27.290067   28896 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:27.290094   28896 api_server.go:166] Checking apiserver status ...
	I0725 17:52:27.290127   28896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:27.303934   28896 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0725 17:52:27.312723   28896 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:27.312770   28896 ssh_runner.go:195] Run: ls
	I0725 17:52:27.316990   28896 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:27.321042   28896 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:27.321071   28896 status.go:422] ha-174036-m03 apiserver status = Running (err=<nil>)
	I0725 17:52:27.321079   28896 status.go:257] ha-174036-m03 status: &{Name:ha-174036-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:27.321092   28896 status.go:255] checking status of ha-174036-m04 ...
	I0725 17:52:27.321383   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:27.321413   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:27.336197   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33871
	I0725 17:52:27.336651   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:27.337092   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:27.337112   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:27.337373   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:27.337536   28896 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 17:52:27.339465   28896 status.go:330] ha-174036-m04 host status = "Running" (err=<nil>)
	I0725 17:52:27.339480   28896 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:27.339757   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:27.339796   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:27.354727   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37591
	I0725 17:52:27.355169   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:27.355713   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:27.355736   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:27.356114   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:27.356289   28896 main.go:141] libmachine: (ha-174036-m04) Calling .GetIP
	I0725 17:52:27.359301   28896 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:27.359744   28896 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:27.359777   28896 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:27.359925   28896 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:27.360231   28896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:27.360274   28896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:27.375117   28896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33217
	I0725 17:52:27.375564   28896 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:27.376090   28896 main.go:141] libmachine: Using API Version  1
	I0725 17:52:27.376112   28896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:27.376453   28896 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:27.376666   28896 main.go:141] libmachine: (ha-174036-m04) Calling .DriverName
	I0725 17:52:27.376917   28896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:27.376941   28896 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHHostname
	I0725 17:52:27.379697   28896 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:27.380229   28896 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:27.380254   28896 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:27.380509   28896 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHPort
	I0725 17:52:27.380685   28896 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHKeyPath
	I0725 17:52:27.380823   28896 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHUsername
	I0725 17:52:27.380941   28896 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m04/id_rsa Username:docker}
	I0725 17:52:27.459414   28896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:27.473793   28896 status.go:257] ha-174036-m04 status: &{Name:ha-174036-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
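The second status run hits the same condition: the SSH dial to 192.168.39.197:22 is retried briefly (retry.go backs off for a few hundred milliseconds) before giving up with "no route to host", which is why the command exits with status 3 even though ha-174036, ha-174036-m03, and ha-174036-m04 all report Running. A quick reachability check against the SSH port, assuming netcat is available on the Jenkins host and using the node IP from the log, distinguishes an unreachable VM from an in-guest failure:

# Probe the SSH port on the failing node (IP taken from the log above).
NODE_IP=192.168.39.197
nc -z -w 5 "$NODE_IP" 22 && echo "port 22 reachable" || echo "no route to host or port closed"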
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr: exit status 3 (3.717271919s)

                                                
                                                
-- stdout --
	ha-174036
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174036-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:52:31.669040   29013 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:52:31.669169   29013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:31.669180   29013 out.go:304] Setting ErrFile to fd 2...
	I0725 17:52:31.669188   29013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:31.669374   29013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:52:31.669600   29013 out.go:298] Setting JSON to false
	I0725 17:52:31.669635   29013 mustload.go:65] Loading cluster: ha-174036
	I0725 17:52:31.669743   29013 notify.go:220] Checking for updates...
	I0725 17:52:31.670110   29013 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:52:31.670129   29013 status.go:255] checking status of ha-174036 ...
	I0725 17:52:31.670580   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:31.670634   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:31.688119   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38439
	I0725 17:52:31.688719   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:31.689427   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:31.689458   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:31.689817   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:31.690012   29013 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:52:31.691666   29013 status.go:330] ha-174036 host status = "Running" (err=<nil>)
	I0725 17:52:31.691684   29013 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:31.691965   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:31.692015   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:31.707561   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46831
	I0725 17:52:31.708143   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:31.708923   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:31.708953   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:31.709300   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:31.709489   29013 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:52:31.712877   29013 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:31.713399   29013 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:31.713423   29013 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:31.713581   29013 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:31.713888   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:31.713928   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:31.729306   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I0725 17:52:31.729654   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:31.730086   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:31.730107   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:31.730446   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:31.730656   29013 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:52:31.730874   29013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:31.730896   29013 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:52:31.733537   29013 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:31.734006   29013 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:31.734031   29013 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:31.734175   29013 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:52:31.734376   29013 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:52:31.734607   29013 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:52:31.734811   29013 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:52:31.819683   29013 ssh_runner.go:195] Run: systemctl --version
	I0725 17:52:31.825123   29013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:31.839012   29013 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:31.839039   29013 api_server.go:166] Checking apiserver status ...
	I0725 17:52:31.839078   29013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:31.852897   29013 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0725 17:52:31.867346   29013 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:31.867405   29013 ssh_runner.go:195] Run: ls
	I0725 17:52:31.871377   29013 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:31.875320   29013 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:31.875339   29013 status.go:422] ha-174036 apiserver status = Running (err=<nil>)
	I0725 17:52:31.875348   29013 status.go:257] ha-174036 status: &{Name:ha-174036 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:31.875363   29013 status.go:255] checking status of ha-174036-m02 ...
	I0725 17:52:31.875657   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:31.875703   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:31.890423   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37135
	I0725 17:52:31.890804   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:31.891245   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:31.891265   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:31.891623   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:31.891798   29013 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 17:52:31.893451   29013 status.go:330] ha-174036-m02 host status = "Running" (err=<nil>)
	I0725 17:52:31.893469   29013 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:31.893746   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:31.893783   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:31.908650   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I0725 17:52:31.909114   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:31.909645   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:31.909672   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:31.910052   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:31.910249   29013 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:52:31.913284   29013 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:31.913722   29013 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:31.913750   29013 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:31.913964   29013 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:31.914357   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:31.914408   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:31.929502   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43991
	I0725 17:52:31.929978   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:31.930460   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:31.930487   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:31.930895   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:31.931091   29013 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:52:31.931305   29013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:31.931345   29013 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:52:31.934009   29013 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:31.934477   29013 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:31.934511   29013 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:31.934599   29013 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:52:31.934794   29013 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:52:31.934938   29013 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:52:31.935077   29013 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	W0725 17:52:35.008571   29013 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.197:22: connect: no route to host
	W0725 17:52:35.008656   29013 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	E0725 17:52:35.008672   29013 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:35.008682   29013 status.go:257] ha-174036-m02 status: &{Name:ha-174036-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0725 17:52:35.008705   29013 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:35.008732   29013 status.go:255] checking status of ha-174036-m03 ...
	I0725 17:52:35.009078   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:35.009123   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:35.023770   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44041
	I0725 17:52:35.024166   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:35.024618   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:35.024640   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:35.024920   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:35.025098   29013 main.go:141] libmachine: (ha-174036-m03) Calling .GetState
	I0725 17:52:35.026992   29013 status.go:330] ha-174036-m03 host status = "Running" (err=<nil>)
	I0725 17:52:35.027021   29013 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:35.027302   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:35.027333   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:35.041542   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41001
	I0725 17:52:35.041982   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:35.042590   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:35.042610   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:35.042904   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:35.043107   29013 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:52:35.046057   29013 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:35.046559   29013 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:35.046584   29013 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:35.046773   29013 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:35.047055   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:35.047086   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:35.062077   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37445
	I0725 17:52:35.062508   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:35.062961   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:35.062980   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:35.063324   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:35.063500   29013 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:52:35.063703   29013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:35.063729   29013 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:52:35.066426   29013 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:35.066846   29013 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:35.066871   29013 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:35.067004   29013 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:52:35.067131   29013 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:52:35.067431   29013 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:52:35.067574   29013 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:52:35.143242   29013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:35.157991   29013 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:35.158022   29013 api_server.go:166] Checking apiserver status ...
	I0725 17:52:35.158063   29013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:35.171290   29013 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0725 17:52:35.180162   29013 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:35.180225   29013 ssh_runner.go:195] Run: ls
	I0725 17:52:35.184254   29013 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:35.188619   29013 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:35.188642   29013 status.go:422] ha-174036-m03 apiserver status = Running (err=<nil>)
	I0725 17:52:35.188653   29013 status.go:257] ha-174036-m03 status: &{Name:ha-174036-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:35.188672   29013 status.go:255] checking status of ha-174036-m04 ...
	I0725 17:52:35.188941   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:35.188975   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:35.203535   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I0725 17:52:35.203929   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:35.204403   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:35.204427   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:35.204707   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:35.204931   29013 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 17:52:35.206537   29013 status.go:330] ha-174036-m04 host status = "Running" (err=<nil>)
	I0725 17:52:35.206555   29013 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:35.206863   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:35.206899   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:35.221133   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0725 17:52:35.221558   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:35.221974   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:35.221993   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:35.222386   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:35.222591   29013 main.go:141] libmachine: (ha-174036-m04) Calling .GetIP
	I0725 17:52:35.225553   29013 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:35.226015   29013 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:35.226043   29013 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:35.226204   29013 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:35.226499   29013 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:35.226532   29013 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:35.241153   29013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46619
	I0725 17:52:35.241544   29013 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:35.242050   29013 main.go:141] libmachine: Using API Version  1
	I0725 17:52:35.242068   29013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:35.242359   29013 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:35.242575   29013 main.go:141] libmachine: (ha-174036-m04) Calling .DriverName
	I0725 17:52:35.242797   29013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:35.242823   29013 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHHostname
	I0725 17:52:35.245810   29013 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:35.246269   29013 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:35.246306   29013 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:35.246436   29013 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHPort
	I0725 17:52:35.246621   29013 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHKeyPath
	I0725 17:52:35.246769   29013 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHUsername
	I0725 17:52:35.246905   29013 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m04/id_rsa Username:docker}
	I0725 17:52:35.327125   29013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:35.342697   29013 status.go:257] ha-174036-m04 status: &{Name:ha-174036-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr: exit status 3 (3.72043104s)

                                                
                                                
-- stdout --
	ha-174036
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174036-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:52:38.015699   29113 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:52:38.015979   29113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:38.015990   29113 out.go:304] Setting ErrFile to fd 2...
	I0725 17:52:38.015994   29113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:38.016185   29113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:52:38.016411   29113 out.go:298] Setting JSON to false
	I0725 17:52:38.016441   29113 mustload.go:65] Loading cluster: ha-174036
	I0725 17:52:38.016490   29113 notify.go:220] Checking for updates...
	I0725 17:52:38.016963   29113 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:52:38.016984   29113 status.go:255] checking status of ha-174036 ...
	I0725 17:52:38.017469   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:38.017515   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:38.036443   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37485
	I0725 17:52:38.036881   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:38.037491   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:38.037522   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:38.037874   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:38.038074   29113 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:52:38.039900   29113 status.go:330] ha-174036 host status = "Running" (err=<nil>)
	I0725 17:52:38.039915   29113 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:38.040206   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:38.040245   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:38.056789   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40141
	I0725 17:52:38.057146   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:38.057578   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:38.057597   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:38.057946   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:38.058136   29113 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:52:38.060970   29113 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:38.061401   29113 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:38.061423   29113 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:38.061522   29113 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:38.061906   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:38.061945   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:38.076556   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0725 17:52:38.076939   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:38.077459   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:38.077569   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:38.077922   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:38.078126   29113 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:52:38.078314   29113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:38.078332   29113 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:52:38.081012   29113 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:38.081394   29113 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:38.081422   29113 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:38.081525   29113 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:52:38.081695   29113 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:52:38.081858   29113 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:52:38.081993   29113 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:52:38.160868   29113 ssh_runner.go:195] Run: systemctl --version
	I0725 17:52:38.166637   29113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:38.180786   29113 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:38.180814   29113 api_server.go:166] Checking apiserver status ...
	I0725 17:52:38.180855   29113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:38.193467   29113 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0725 17:52:38.202193   29113 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:38.202282   29113 ssh_runner.go:195] Run: ls
	I0725 17:52:38.207084   29113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:38.212951   29113 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:38.212975   29113 status.go:422] ha-174036 apiserver status = Running (err=<nil>)
	I0725 17:52:38.212987   29113 status.go:257] ha-174036 status: &{Name:ha-174036 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:38.213017   29113 status.go:255] checking status of ha-174036-m02 ...
	I0725 17:52:38.213344   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:38.213375   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:38.227986   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38133
	I0725 17:52:38.228462   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:38.228998   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:38.229017   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:38.229309   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:38.229481   29113 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 17:52:38.231133   29113 status.go:330] ha-174036-m02 host status = "Running" (err=<nil>)
	I0725 17:52:38.231152   29113 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:38.231472   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:38.231504   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:38.245609   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
	I0725 17:52:38.245982   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:38.246510   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:38.246529   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:38.246890   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:38.247090   29113 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:52:38.249752   29113 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:38.250153   29113 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:38.250180   29113 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:38.250308   29113 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 17:52:38.250602   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:38.250633   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:38.264988   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0725 17:52:38.265388   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:38.265831   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:38.265850   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:38.266146   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:38.266363   29113 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:52:38.266596   29113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:38.266622   29113 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:52:38.269484   29113 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:38.269891   29113 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:52:38.269914   29113 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:52:38.270020   29113 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:52:38.270208   29113 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:52:38.270364   29113 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:52:38.270513   29113 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	W0725 17:52:41.344582   29113 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.197:22: connect: no route to host
	W0725 17:52:41.344680   29113 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	E0725 17:52:41.344703   29113 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:41.344716   29113 status.go:257] ha-174036-m02 status: &{Name:ha-174036-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0725 17:52:41.344739   29113 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.197:22: connect: no route to host
	I0725 17:52:41.344750   29113 status.go:255] checking status of ha-174036-m03 ...
	I0725 17:52:41.345153   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:41.345198   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:41.361716   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35139
	I0725 17:52:41.362319   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:41.362817   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:41.362837   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:41.363214   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:41.363399   29113 main.go:141] libmachine: (ha-174036-m03) Calling .GetState
	I0725 17:52:41.365915   29113 status.go:330] ha-174036-m03 host status = "Running" (err=<nil>)
	I0725 17:52:41.365931   29113 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:41.366277   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:41.366325   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:41.381072   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
	I0725 17:52:41.381488   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:41.381988   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:41.382015   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:41.382355   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:41.382575   29113 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:52:41.385470   29113 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:41.385961   29113 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:41.385988   29113 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:41.386155   29113 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:41.386452   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:41.386487   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:41.402398   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34729
	I0725 17:52:41.402914   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:41.403470   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:41.403497   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:41.403821   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:41.404041   29113 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:52:41.404220   29113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:41.404242   29113 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:52:41.407521   29113 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:41.408057   29113 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:41.408075   29113 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:41.408240   29113 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:52:41.408497   29113 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:52:41.408680   29113 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:52:41.408812   29113 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:52:41.487477   29113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:41.501401   29113 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:41.501428   29113 api_server.go:166] Checking apiserver status ...
	I0725 17:52:41.501461   29113 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:41.514486   29113 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0725 17:52:41.524387   29113 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:41.524442   29113 ssh_runner.go:195] Run: ls
	I0725 17:52:41.529415   29113 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:41.535216   29113 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:41.535239   29113 status.go:422] ha-174036-m03 apiserver status = Running (err=<nil>)
	I0725 17:52:41.535250   29113 status.go:257] ha-174036-m03 status: &{Name:ha-174036-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:41.535271   29113 status.go:255] checking status of ha-174036-m04 ...
	I0725 17:52:41.535582   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:41.535614   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:41.551018   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45553
	I0725 17:52:41.551423   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:41.551907   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:41.551931   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:41.552223   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:41.552415   29113 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 17:52:41.553988   29113 status.go:330] ha-174036-m04 host status = "Running" (err=<nil>)
	I0725 17:52:41.554008   29113 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:41.554319   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:41.554353   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:41.568935   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0725 17:52:41.569378   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:41.569842   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:41.569864   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:41.570154   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:41.570446   29113 main.go:141] libmachine: (ha-174036-m04) Calling .GetIP
	I0725 17:52:41.573264   29113 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:41.573726   29113 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:41.573756   29113 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:41.573912   29113 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:41.574309   29113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:41.574350   29113 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:41.588659   29113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34467
	I0725 17:52:41.589033   29113 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:41.589482   29113 main.go:141] libmachine: Using API Version  1
	I0725 17:52:41.589501   29113 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:41.589825   29113 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:41.589983   29113 main.go:141] libmachine: (ha-174036-m04) Calling .DriverName
	I0725 17:52:41.590171   29113 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:41.590191   29113 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHHostname
	I0725 17:52:41.593176   29113 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:41.593618   29113 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:41.593654   29113 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:41.593800   29113 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHPort
	I0725 17:52:41.593970   29113 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHKeyPath
	I0725 17:52:41.594104   29113 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHUsername
	I0725 17:52:41.594242   29113 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m04/id_rsa Username:docker}
	I0725 17:52:41.679130   29113 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:41.693253   29113 status.go:257] ha-174036-m04 status: &{Name:ha-174036-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr: exit status 7 (597.197499ms)

                                                
                                                
-- stdout --
	ha-174036
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-174036-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:52:51.926213   29274 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:52:51.926317   29274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:51.926329   29274 out.go:304] Setting ErrFile to fd 2...
	I0725 17:52:51.926335   29274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:51.926528   29274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:52:51.926746   29274 out.go:298] Setting JSON to false
	I0725 17:52:51.926773   29274 mustload.go:65] Loading cluster: ha-174036
	I0725 17:52:51.926902   29274 notify.go:220] Checking for updates...
	I0725 17:52:51.927235   29274 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:52:51.927253   29274 status.go:255] checking status of ha-174036 ...
	I0725 17:52:51.927655   29274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:51.927717   29274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:51.946549   29274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44809
	I0725 17:52:51.947005   29274 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:51.947579   29274 main.go:141] libmachine: Using API Version  1
	I0725 17:52:51.947609   29274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:51.947999   29274 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:51.948224   29274 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:52:51.950047   29274 status.go:330] ha-174036 host status = "Running" (err=<nil>)
	I0725 17:52:51.950063   29274 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:51.950351   29274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:51.950382   29274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:51.965789   29274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34627
	I0725 17:52:51.966137   29274 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:51.966616   29274 main.go:141] libmachine: Using API Version  1
	I0725 17:52:51.966644   29274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:51.966944   29274 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:51.967142   29274 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:52:51.969970   29274 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:51.970509   29274 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:51.970546   29274 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:51.970677   29274 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:52:51.970987   29274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:51.971031   29274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:51.985484   29274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0725 17:52:51.985835   29274 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:51.986252   29274 main.go:141] libmachine: Using API Version  1
	I0725 17:52:51.986269   29274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:51.986582   29274 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:51.986755   29274 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:52:51.986927   29274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:51.986960   29274 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:52:51.989594   29274 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:51.990012   29274 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:52:51.990044   29274 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:52:51.990180   29274 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:52:51.990337   29274 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:52:51.990478   29274 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:52:51.990605   29274 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:52:52.071877   29274 ssh_runner.go:195] Run: systemctl --version
	I0725 17:52:52.078797   29274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:52.093009   29274 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:52.093034   29274 api_server.go:166] Checking apiserver status ...
	I0725 17:52:52.093067   29274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:52.107048   29274 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	W0725 17:52:52.116314   29274 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:52.116376   29274 ssh_runner.go:195] Run: ls
	I0725 17:52:52.120582   29274 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:52.126644   29274 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:52.126672   29274 status.go:422] ha-174036 apiserver status = Running (err=<nil>)
	I0725 17:52:52.126685   29274 status.go:257] ha-174036 status: &{Name:ha-174036 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:52.126706   29274 status.go:255] checking status of ha-174036-m02 ...
	I0725 17:52:52.127108   29274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:52.127174   29274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:52.141780   29274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I0725 17:52:52.142229   29274 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:52.142714   29274 main.go:141] libmachine: Using API Version  1
	I0725 17:52:52.142739   29274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:52.143045   29274 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:52.143243   29274 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 17:52:52.144851   29274 status.go:330] ha-174036-m02 host status = "Stopped" (err=<nil>)
	I0725 17:52:52.144866   29274 status.go:343] host is not running, skipping remaining checks
	I0725 17:52:52.144871   29274 status.go:257] ha-174036-m02 status: &{Name:ha-174036-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:52.144887   29274 status.go:255] checking status of ha-174036-m03 ...
	I0725 17:52:52.145176   29274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:52.145212   29274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:52.159877   29274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37613
	I0725 17:52:52.160344   29274 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:52.160803   29274 main.go:141] libmachine: Using API Version  1
	I0725 17:52:52.160824   29274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:52.161131   29274 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:52.161298   29274 main.go:141] libmachine: (ha-174036-m03) Calling .GetState
	I0725 17:52:52.162752   29274 status.go:330] ha-174036-m03 host status = "Running" (err=<nil>)
	I0725 17:52:52.162765   29274 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:52.163090   29274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:52.163135   29274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:52.178762   29274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44645
	I0725 17:52:52.179157   29274 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:52.179586   29274 main.go:141] libmachine: Using API Version  1
	I0725 17:52:52.179601   29274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:52.179881   29274 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:52.180104   29274 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:52:52.182859   29274 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:52.183365   29274 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:52.183403   29274 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:52.183573   29274 host.go:66] Checking if "ha-174036-m03" exists ...
	I0725 17:52:52.183842   29274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:52.183873   29274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:52.198008   29274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0725 17:52:52.198440   29274 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:52.198999   29274 main.go:141] libmachine: Using API Version  1
	I0725 17:52:52.199019   29274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:52.199359   29274 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:52.199527   29274 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:52:52.199713   29274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:52.199732   29274 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:52:52.202465   29274 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:52.202937   29274 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:52.202969   29274 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:52.203123   29274 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:52:52.203293   29274 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:52:52.203432   29274 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:52:52.203608   29274 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:52:52.283337   29274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:52.299712   29274 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 17:52:52.299743   29274 api_server.go:166] Checking apiserver status ...
	I0725 17:52:52.299799   29274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:52:52.313648   29274 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup
	W0725 17:52:52.322513   29274 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1558/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 17:52:52.322560   29274 ssh_runner.go:195] Run: ls
	I0725 17:52:52.326226   29274 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 17:52:52.330397   29274 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 17:52:52.330420   29274 status.go:422] ha-174036-m03 apiserver status = Running (err=<nil>)
	I0725 17:52:52.330430   29274 status.go:257] ha-174036-m03 status: &{Name:ha-174036-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 17:52:52.330449   29274 status.go:255] checking status of ha-174036-m04 ...
	I0725 17:52:52.330820   29274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:52.330866   29274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:52.345425   29274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I0725 17:52:52.345832   29274 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:52.346251   29274 main.go:141] libmachine: Using API Version  1
	I0725 17:52:52.346272   29274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:52.346523   29274 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:52.346634   29274 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 17:52:52.348152   29274 status.go:330] ha-174036-m04 host status = "Running" (err=<nil>)
	I0725 17:52:52.348168   29274 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:52.348464   29274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:52.348497   29274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:52.362909   29274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I0725 17:52:52.363302   29274 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:52.363759   29274 main.go:141] libmachine: Using API Version  1
	I0725 17:52:52.363776   29274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:52.364058   29274 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:52.364258   29274 main.go:141] libmachine: (ha-174036-m04) Calling .GetIP
	I0725 17:52:52.367056   29274 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:52.367486   29274 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:52.367510   29274 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:52.367603   29274 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 17:52:52.367924   29274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:52.367961   29274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:52.382468   29274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34805
	I0725 17:52:52.382789   29274 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:52.383215   29274 main.go:141] libmachine: Using API Version  1
	I0725 17:52:52.383235   29274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:52.383526   29274 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:52.383695   29274 main.go:141] libmachine: (ha-174036-m04) Calling .DriverName
	I0725 17:52:52.383835   29274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 17:52:52.383852   29274 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHHostname
	I0725 17:52:52.386072   29274 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:52.386467   29274 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:52.386493   29274 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:52.386564   29274 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHPort
	I0725 17:52:52.386728   29274 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHKeyPath
	I0725 17:52:52.386855   29274 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHUsername
	I0725 17:52:52.386981   29274 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m04/id_rsa Username:docker}
	I0725 17:52:52.467159   29274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:52:52.481347   29274 status.go:257] ha-174036-m04 status: &{Name:ha-174036-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr" : exit status 7
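Each status pass captured above follows the same sequence on a control-plane node: open an SSH session, check disk usage of /var, confirm the kubelet unit is active, locate the kube-apiserver process with pgrep, and, when the freezer-cgroup lookup fails, fall back to probing the load-balanced apiserver endpoint at https://192.168.39.254:8443/healthz. For ha-174036-m02 the sequence never gets past the SSH dial ("no route to host"), so the node is reported first as Error and then, once the kvm2 driver reports the host as Stopped, as Stopped, which yields exit status 7. A minimal standalone sketch of that final healthz probe is shown below; the URL is taken from the logs, while the TLS-skipping transport and the 5-second timeout are assumptions for an ad-hoc check, not minikube's actual client configuration.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the logs above (the cluster VIP on port 8443).
	url := "https://192.168.39.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-internal certificate, so this ad-hoc probe
		// skips verification; whether minikube's own check does is not shown here.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get(url)
	if err != nil {
		// An unreachable apiserver would surface here, analogous to the
		// "no route to host" errors recorded for ha-174036-m02.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 with the body "ok", matching the log lines
	// "https://192.168.39.254:8443/healthz returned 200: ok".
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}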
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174036 -n ha-174036
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174036 logs -n 25: (1.307350284s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036:/home/docker/cp-test_ha-174036-m03_ha-174036.txt                      |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036 sudo cat                                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m03_ha-174036.txt                                |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m02:/home/docker/cp-test_ha-174036-m03_ha-174036-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m02 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m03_ha-174036-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04:/home/docker/cp-test_ha-174036-m03_ha-174036-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m04 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m03_ha-174036-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp testdata/cp-test.txt                                               | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile106261026/001/cp-test_ha-174036-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036:/home/docker/cp-test_ha-174036-m04_ha-174036.txt                      |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036 sudo cat                                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036.txt                                |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m02:/home/docker/cp-test_ha-174036-m04_ha-174036-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m02 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03:/home/docker/cp-test_ha-174036-m04_ha-174036-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m03 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-174036 node stop m02 -v=7                                                    | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-174036 node start m02 -v=7                                                   | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 17:45:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:45:00.348770   23738 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:45:00.348857   23738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:45:00.348865   23738 out.go:304] Setting ErrFile to fd 2...
	I0725 17:45:00.348869   23738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:45:00.349027   23738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:45:00.349539   23738 out.go:298] Setting JSON to false
	I0725 17:45:00.350312   23738 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1644,"bootTime":1721927856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 17:45:00.350383   23738 start.go:139] virtualization: kvm guest
	I0725 17:45:00.352577   23738 out.go:177] * [ha-174036] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 17:45:00.353919   23738 notify.go:220] Checking for updates...
	I0725 17:45:00.353961   23738 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 17:45:00.355138   23738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:45:00.356353   23738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:45:00.357757   23738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:45:00.358988   23738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 17:45:00.360117   23738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 17:45:00.361418   23738 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 17:45:00.395042   23738 out.go:177] * Using the kvm2 driver based on user configuration
	I0725 17:45:00.396396   23738 start.go:297] selected driver: kvm2
	I0725 17:45:00.396418   23738 start.go:901] validating driver "kvm2" against <nil>
	I0725 17:45:00.396428   23738 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:45:00.397096   23738 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:45:00.397175   23738 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 17:45:00.411464   23738 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 17:45:00.411507   23738 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 17:45:00.411738   23738 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:45:00.411765   23738 cni.go:84] Creating CNI manager for ""
	I0725 17:45:00.411774   23738 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0725 17:45:00.411785   23738 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0725 17:45:00.411844   23738 start.go:340] cluster config:
	{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:45:00.411984   23738 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:45:00.413645   23738 out.go:177] * Starting "ha-174036" primary control-plane node in "ha-174036" cluster
	I0725 17:45:00.414740   23738 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:45:00.414773   23738 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 17:45:00.414785   23738 cache.go:56] Caching tarball of preloaded images
	I0725 17:45:00.414853   23738 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 17:45:00.414865   23738 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 17:45:00.415171   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:45:00.415193   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json: {Name:mk2194c9dd658db00a21b20213f9200952dd6688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:00.415337   23738 start.go:360] acquireMachinesLock for ha-174036: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 17:45:00.415370   23738 start.go:364] duration metric: took 17.988µs to acquireMachinesLock for "ha-174036"
	I0725 17:45:00.415384   23738 start.go:93] Provisioning new machine with config: &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:45:00.415465   23738 start.go:125] createHost starting for "" (driver="kvm2")
	I0725 17:45:00.416982   23738 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 17:45:00.417113   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:00.417156   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:00.430633   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I0725 17:45:00.431025   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:00.431524   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:00.431546   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:00.431886   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:00.432088   23738 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:45:00.432255   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:00.432479   23738 start.go:159] libmachine.API.Create for "ha-174036" (driver="kvm2")
	I0725 17:45:00.432513   23738 client.go:168] LocalClient.Create starting
	I0725 17:45:00.432565   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 17:45:00.432604   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:45:00.432622   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:45:00.432688   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 17:45:00.432708   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:45:00.432724   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:45:00.432741   23738 main.go:141] libmachine: Running pre-create checks...
	I0725 17:45:00.432751   23738 main.go:141] libmachine: (ha-174036) Calling .PreCreateCheck
	I0725 17:45:00.433073   23738 main.go:141] libmachine: (ha-174036) Calling .GetConfigRaw
	I0725 17:45:00.433475   23738 main.go:141] libmachine: Creating machine...
	I0725 17:45:00.433490   23738 main.go:141] libmachine: (ha-174036) Calling .Create
	I0725 17:45:00.433633   23738 main.go:141] libmachine: (ha-174036) Creating KVM machine...
	I0725 17:45:00.434996   23738 main.go:141] libmachine: (ha-174036) DBG | found existing default KVM network
	I0725 17:45:00.435642   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:00.435516   23761 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0725 17:45:00.435673   23738 main.go:141] libmachine: (ha-174036) DBG | created network xml: 
	I0725 17:45:00.435690   23738 main.go:141] libmachine: (ha-174036) DBG | <network>
	I0725 17:45:00.435769   23738 main.go:141] libmachine: (ha-174036) DBG |   <name>mk-ha-174036</name>
	I0725 17:45:00.435794   23738 main.go:141] libmachine: (ha-174036) DBG |   <dns enable='no'/>
	I0725 17:45:00.435807   23738 main.go:141] libmachine: (ha-174036) DBG |   
	I0725 17:45:00.435819   23738 main.go:141] libmachine: (ha-174036) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0725 17:45:00.435828   23738 main.go:141] libmachine: (ha-174036) DBG |     <dhcp>
	I0725 17:45:00.435837   23738 main.go:141] libmachine: (ha-174036) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0725 17:45:00.435843   23738 main.go:141] libmachine: (ha-174036) DBG |     </dhcp>
	I0725 17:45:00.435851   23738 main.go:141] libmachine: (ha-174036) DBG |   </ip>
	I0725 17:45:00.435864   23738 main.go:141] libmachine: (ha-174036) DBG |   
	I0725 17:45:00.435875   23738 main.go:141] libmachine: (ha-174036) DBG | </network>
	I0725 17:45:00.435895   23738 main.go:141] libmachine: (ha-174036) DBG | 
	I0725 17:45:00.441387   23738 main.go:141] libmachine: (ha-174036) DBG | trying to create private KVM network mk-ha-174036 192.168.39.0/24...
	I0725 17:45:00.505314   23738 main.go:141] libmachine: (ha-174036) DBG | private KVM network mk-ha-174036 192.168.39.0/24 created
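
	The DBG lines above show the network XML the kvm2 driver generates before asking libvirt to create the private mk-ha-174036 network. Below is a minimal Go sketch of producing an XML document of that shape with text/template; the template and field names are illustrative only, not minikube's actual code.

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    // networkTmpl mirrors the shape of the XML logged above; the field
	    // names are illustrative, not the kvm2 driver's real ones.
	    const networkTmpl = `<network>
	      <name>mk-{{.Profile}}</name>
	      <dns enable='no'/>
	      <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	        <dhcp>
	          <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
	        </dhcp>
	      </ip>
	    </network>
	    `

	    func main() {
	        params := struct {
	            Profile, Gateway, Netmask, ClientMin, ClientMax string
	        }{"ha-174036", "192.168.39.1", "255.255.255.0", "192.168.39.2", "192.168.39.253"}

	        t := template.Must(template.New("net").Parse(networkTmpl))
	        // Print the rendered XML; a real driver would hand it to libvirt's
	        // network-define call instead of stdout.
	        if err := t.Execute(os.Stdout, params); err != nil {
	            panic(err)
	        }
	    }
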
	I0725 17:45:00.505386   23738 main.go:141] libmachine: (ha-174036) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036 ...
	I0725 17:45:00.505412   23738 main.go:141] libmachine: (ha-174036) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 17:45:00.505455   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:00.505308   23761 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:45:00.505510   23738 main.go:141] libmachine: (ha-174036) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 17:45:00.744739   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:00.744575   23761 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa...
	I0725 17:45:00.989987   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:00.989829   23761 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/ha-174036.rawdisk...
	I0725 17:45:00.990015   23738 main.go:141] libmachine: (ha-174036) DBG | Writing magic tar header
	I0725 17:45:00.990030   23738 main.go:141] libmachine: (ha-174036) DBG | Writing SSH key tar header
	I0725 17:45:00.990043   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:00.989944   23761 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036 ...
	I0725 17:45:00.990057   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036
	I0725 17:45:00.990083   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036 (perms=drwx------)
	I0725 17:45:00.990091   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 17:45:00.990101   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:45:00.990107   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 17:45:00.990114   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 17:45:00.990130   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 17:45:00.990141   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 17:45:00.990225   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 17:45:00.990277   23738 main.go:141] libmachine: (ha-174036) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 17:45:00.990286   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 17:45:00.990319   23738 main.go:141] libmachine: (ha-174036) Creating domain...
	I0725 17:45:00.990345   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home/jenkins
	I0725 17:45:00.990364   23738 main.go:141] libmachine: (ha-174036) DBG | Checking permissions on dir: /home
	I0725 17:45:00.990375   23738 main.go:141] libmachine: (ha-174036) DBG | Skipping /home - not owner
	I0725 17:45:00.991283   23738 main.go:141] libmachine: (ha-174036) define libvirt domain using xml: 
	I0725 17:45:00.991301   23738 main.go:141] libmachine: (ha-174036) <domain type='kvm'>
	I0725 17:45:00.991311   23738 main.go:141] libmachine: (ha-174036)   <name>ha-174036</name>
	I0725 17:45:00.991329   23738 main.go:141] libmachine: (ha-174036)   <memory unit='MiB'>2200</memory>
	I0725 17:45:00.991338   23738 main.go:141] libmachine: (ha-174036)   <vcpu>2</vcpu>
	I0725 17:45:00.991345   23738 main.go:141] libmachine: (ha-174036)   <features>
	I0725 17:45:00.991353   23738 main.go:141] libmachine: (ha-174036)     <acpi/>
	I0725 17:45:00.991367   23738 main.go:141] libmachine: (ha-174036)     <apic/>
	I0725 17:45:00.991375   23738 main.go:141] libmachine: (ha-174036)     <pae/>
	I0725 17:45:00.991386   23738 main.go:141] libmachine: (ha-174036)     
	I0725 17:45:00.991391   23738 main.go:141] libmachine: (ha-174036)   </features>
	I0725 17:45:00.991395   23738 main.go:141] libmachine: (ha-174036)   <cpu mode='host-passthrough'>
	I0725 17:45:00.991400   23738 main.go:141] libmachine: (ha-174036)   
	I0725 17:45:00.991404   23738 main.go:141] libmachine: (ha-174036)   </cpu>
	I0725 17:45:00.991409   23738 main.go:141] libmachine: (ha-174036)   <os>
	I0725 17:45:00.991415   23738 main.go:141] libmachine: (ha-174036)     <type>hvm</type>
	I0725 17:45:00.991421   23738 main.go:141] libmachine: (ha-174036)     <boot dev='cdrom'/>
	I0725 17:45:00.991427   23738 main.go:141] libmachine: (ha-174036)     <boot dev='hd'/>
	I0725 17:45:00.991440   23738 main.go:141] libmachine: (ha-174036)     <bootmenu enable='no'/>
	I0725 17:45:00.991454   23738 main.go:141] libmachine: (ha-174036)   </os>
	I0725 17:45:00.991479   23738 main.go:141] libmachine: (ha-174036)   <devices>
	I0725 17:45:00.991498   23738 main.go:141] libmachine: (ha-174036)     <disk type='file' device='cdrom'>
	I0725 17:45:00.991512   23738 main.go:141] libmachine: (ha-174036)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/boot2docker.iso'/>
	I0725 17:45:00.991521   23738 main.go:141] libmachine: (ha-174036)       <target dev='hdc' bus='scsi'/>
	I0725 17:45:00.991531   23738 main.go:141] libmachine: (ha-174036)       <readonly/>
	I0725 17:45:00.991537   23738 main.go:141] libmachine: (ha-174036)     </disk>
	I0725 17:45:00.991556   23738 main.go:141] libmachine: (ha-174036)     <disk type='file' device='disk'>
	I0725 17:45:00.991570   23738 main.go:141] libmachine: (ha-174036)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 17:45:00.991583   23738 main.go:141] libmachine: (ha-174036)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/ha-174036.rawdisk'/>
	I0725 17:45:00.991588   23738 main.go:141] libmachine: (ha-174036)       <target dev='hda' bus='virtio'/>
	I0725 17:45:00.991593   23738 main.go:141] libmachine: (ha-174036)     </disk>
	I0725 17:45:00.991597   23738 main.go:141] libmachine: (ha-174036)     <interface type='network'>
	I0725 17:45:00.991602   23738 main.go:141] libmachine: (ha-174036)       <source network='mk-ha-174036'/>
	I0725 17:45:00.991606   23738 main.go:141] libmachine: (ha-174036)       <model type='virtio'/>
	I0725 17:45:00.991611   23738 main.go:141] libmachine: (ha-174036)     </interface>
	I0725 17:45:00.991615   23738 main.go:141] libmachine: (ha-174036)     <interface type='network'>
	I0725 17:45:00.991620   23738 main.go:141] libmachine: (ha-174036)       <source network='default'/>
	I0725 17:45:00.991624   23738 main.go:141] libmachine: (ha-174036)       <model type='virtio'/>
	I0725 17:45:00.991651   23738 main.go:141] libmachine: (ha-174036)     </interface>
	I0725 17:45:00.991667   23738 main.go:141] libmachine: (ha-174036)     <serial type='pty'>
	I0725 17:45:00.991674   23738 main.go:141] libmachine: (ha-174036)       <target port='0'/>
	I0725 17:45:00.991678   23738 main.go:141] libmachine: (ha-174036)     </serial>
	I0725 17:45:00.991683   23738 main.go:141] libmachine: (ha-174036)     <console type='pty'>
	I0725 17:45:00.991687   23738 main.go:141] libmachine: (ha-174036)       <target type='serial' port='0'/>
	I0725 17:45:00.991695   23738 main.go:141] libmachine: (ha-174036)     </console>
	I0725 17:45:00.991699   23738 main.go:141] libmachine: (ha-174036)     <rng model='virtio'>
	I0725 17:45:00.991704   23738 main.go:141] libmachine: (ha-174036)       <backend model='random'>/dev/random</backend>
	I0725 17:45:00.991708   23738 main.go:141] libmachine: (ha-174036)     </rng>
	I0725 17:45:00.991712   23738 main.go:141] libmachine: (ha-174036)     
	I0725 17:45:00.991716   23738 main.go:141] libmachine: (ha-174036)     
	I0725 17:45:00.991721   23738 main.go:141] libmachine: (ha-174036)   </devices>
	I0725 17:45:00.991724   23738 main.go:141] libmachine: (ha-174036) </domain>
	I0725 17:45:00.991730   23738 main.go:141] libmachine: (ha-174036) 
	I0725 17:45:00.996216   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:49:b0:79 in network default
	I0725 17:45:00.996792   23738 main.go:141] libmachine: (ha-174036) Ensuring networks are active...
	I0725 17:45:00.996808   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:00.997409   23738 main.go:141] libmachine: (ha-174036) Ensuring network default is active
	I0725 17:45:00.997709   23738 main.go:141] libmachine: (ha-174036) Ensuring network mk-ha-174036 is active
	I0725 17:45:00.998094   23738 main.go:141] libmachine: (ha-174036) Getting domain xml...
	I0725 17:45:00.998683   23738 main.go:141] libmachine: (ha-174036) Creating domain...
	I0725 17:45:02.172283   23738 main.go:141] libmachine: (ha-174036) Waiting to get IP...
	I0725 17:45:02.172950   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:02.173296   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:02.173335   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:02.173277   23761 retry.go:31] will retry after 205.432801ms: waiting for machine to come up
	I0725 17:45:02.380899   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:02.381266   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:02.381313   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:02.381235   23761 retry.go:31] will retry after 287.651092ms: waiting for machine to come up
	I0725 17:45:02.670750   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:02.671046   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:02.671072   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:02.671001   23761 retry.go:31] will retry after 381.489127ms: waiting for machine to come up
	I0725 17:45:03.054449   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:03.054925   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:03.054951   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:03.054890   23761 retry.go:31] will retry after 590.979983ms: waiting for machine to come up
	I0725 17:45:03.647535   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:03.647896   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:03.647924   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:03.647815   23761 retry.go:31] will retry after 502.305492ms: waiting for machine to come up
	I0725 17:45:04.151385   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:04.151760   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:04.151788   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:04.151714   23761 retry.go:31] will retry after 653.566358ms: waiting for machine to come up
	I0725 17:45:04.806401   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:04.806814   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:04.806857   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:04.806780   23761 retry.go:31] will retry after 1.160094808s: waiting for machine to come up
	I0725 17:45:05.968613   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:05.969103   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:05.969127   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:05.969060   23761 retry.go:31] will retry after 1.254291954s: waiting for machine to come up
	I0725 17:45:07.225610   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:07.226094   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:07.226122   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:07.226028   23761 retry.go:31] will retry after 1.803882415s: waiting for machine to come up
	I0725 17:45:09.031955   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:09.032498   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:09.032525   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:09.032453   23761 retry.go:31] will retry after 1.590991223s: waiting for machine to come up
	I0725 17:45:10.625217   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:10.625590   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:10.625616   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:10.625545   23761 retry.go:31] will retry after 2.115148623s: waiting for machine to come up
	I0725 17:45:12.743735   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:12.744200   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:12.744227   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:12.744144   23761 retry.go:31] will retry after 2.279680866s: waiting for machine to come up
	I0725 17:45:15.026530   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:15.026947   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:15.026989   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:15.026903   23761 retry.go:31] will retry after 3.465368523s: waiting for machine to come up
	I0725 17:45:18.496008   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:18.496393   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find current IP address of domain ha-174036 in network mk-ha-174036
	I0725 17:45:18.496420   23738 main.go:141] libmachine: (ha-174036) DBG | I0725 17:45:18.496292   23761 retry.go:31] will retry after 3.691118212s: waiting for machine to come up
	I0725 17:45:22.190099   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.190574   23738 main.go:141] libmachine: (ha-174036) Found IP for machine: 192.168.39.165
	I0725 17:45:22.190589   23738 main.go:141] libmachine: (ha-174036) Reserving static IP address...
	I0725 17:45:22.190598   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has current primary IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.191024   23738 main.go:141] libmachine: (ha-174036) DBG | unable to find host DHCP lease matching {name: "ha-174036", mac: "52:54:00:0f:45:3b", ip: "192.168.39.165"} in network mk-ha-174036
	I0725 17:45:22.259473   23738 main.go:141] libmachine: (ha-174036) DBG | Getting to WaitForSSH function...
	I0725 17:45:22.259505   23738 main.go:141] libmachine: (ha-174036) Reserved static IP address: 192.168.39.165
	I0725 17:45:22.259518   23738 main.go:141] libmachine: (ha-174036) Waiting for SSH to be available...
	I0725 17:45:22.261986   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.262346   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.262375   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.262565   23738 main.go:141] libmachine: (ha-174036) DBG | Using SSH client type: external
	I0725 17:45:22.262594   23738 main.go:141] libmachine: (ha-174036) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa (-rw-------)
	I0725 17:45:22.262633   23738 main.go:141] libmachine: (ha-174036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 17:45:22.262646   23738 main.go:141] libmachine: (ha-174036) DBG | About to run SSH command:
	I0725 17:45:22.262661   23738 main.go:141] libmachine: (ha-174036) DBG | exit 0
	I0725 17:45:22.383973   23738 main.go:141] libmachine: (ha-174036) DBG | SSH cmd err, output: <nil>: 
	I0725 17:45:22.384244   23738 main.go:141] libmachine: (ha-174036) KVM machine creation complete!
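
	The repeated "will retry after ..." lines above are the driver polling the new domain for a DHCP lease with growing, jittered delays until an IP appears. A rough sketch of that wait-with-backoff pattern follows; the waitForIP helper is hypothetical, not minikube's retry package.

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // waitForIP polls lookup() until it returns an address or the deadline
	    // passes, sleeping a little longer (with jitter) after each failure,
	    // roughly like the "will retry after ..." lines in the log above.
	    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        delay := 200 * time.Millisecond
	        for time.Now().Before(deadline) {
	            if ip, err := lookup(); err == nil {
	                return ip, nil
	            }
	            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
	            time.Sleep(delay + jitter)
	            if delay < 4*time.Second {
	                delay *= 2
	            }
	        }
	        return "", errors.New("timed out waiting for machine IP")
	    }

	    func main() {
	        calls := 0
	        ip, err := waitForIP(func() (string, error) {
	            calls++
	            if calls < 3 {
	                return "", errors.New("no DHCP lease yet")
	            }
	            return "192.168.39.165", nil
	        }, 30*time.Second)
	        fmt.Println(ip, err)
	    }
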
	I0725 17:45:22.384527   23738 main.go:141] libmachine: (ha-174036) Calling .GetConfigRaw
	I0725 17:45:22.385028   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:22.385267   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:22.385461   23738 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 17:45:22.385474   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:45:22.386912   23738 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 17:45:22.386924   23738 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 17:45:22.386929   23738 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 17:45:22.386934   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:22.388972   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.389264   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.389288   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.389458   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:22.389627   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.389755   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.389887   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:22.390016   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:22.390209   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:22.390222   23738 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 17:45:22.491704   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:45:22.491727   23738 main.go:141] libmachine: Detecting the provisioner...
	I0725 17:45:22.491735   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:22.494256   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.494534   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.494556   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.494686   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:22.494849   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.494975   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.495087   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:22.495251   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:22.495415   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:22.495425   23738 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 17:45:22.600486   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 17:45:22.600570   23738 main.go:141] libmachine: found compatible host: buildroot
	I0725 17:45:22.600586   23738 main.go:141] libmachine: Provisioning with buildroot...
	I0725 17:45:22.600598   23738 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:45:22.600843   23738 buildroot.go:166] provisioning hostname "ha-174036"
	I0725 17:45:22.600879   23738 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:45:22.601051   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:22.603640   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.603972   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.603992   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.604140   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:22.604336   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.604496   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.604743   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:22.604937   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:22.605114   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:22.605129   23738 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174036 && echo "ha-174036" | sudo tee /etc/hostname
	I0725 17:45:22.721381   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174036
	
	I0725 17:45:22.721406   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:22.724161   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.724578   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.724605   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.724750   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:22.724962   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.725113   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:22.725265   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:22.725429   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:22.725602   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:22.725617   23738 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174036/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:45:22.836494   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:45:22.836528   23738 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 17:45:22.836559   23738 buildroot.go:174] setting up certificates
	I0725 17:45:22.836568   23738 provision.go:84] configureAuth start
	I0725 17:45:22.836577   23738 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:45:22.836867   23738 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:45:22.839498   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.839816   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.839838   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.839991   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:22.842187   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.842512   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:22.842531   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:22.842662   23738 provision.go:143] copyHostCerts
	I0725 17:45:22.842686   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:45:22.842718   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 17:45:22.842729   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:45:22.842813   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 17:45:22.842919   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:45:22.842951   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 17:45:22.842960   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:45:22.842999   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 17:45:22.843069   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:45:22.843092   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 17:45:22.843101   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:45:22.843141   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 17:45:22.843217   23738 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.ha-174036 san=[127.0.0.1 192.168.39.165 ha-174036 localhost minikube]
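
	provision.go generates a server certificate whose SANs cover the machine IP, hostname, localhost and minikube, as listed in san=[...] above. A self-contained sketch of issuing such a certificate with Go's crypto/x509 follows; it is self-signed for brevity, whereas minikube signs it with the CA key referenced above.

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        // Key plus a server certificate whose SANs mirror the san=[...] list
	        // in the log; values are copied from the log, the code is illustrative.
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-174036"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            DNSNames:     []string{"ha-174036", "localhost", "minikube"},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.165")},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
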
	I0725 17:45:23.378310   23738 provision.go:177] copyRemoteCerts
	I0725 17:45:23.378376   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:45:23.378398   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:23.381252   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.381659   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.381689   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.381866   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:23.382088   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.382221   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:23.382367   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:23.461843   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 17:45:23.461909   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 17:45:23.484737   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 17:45:23.484824   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0725 17:45:23.506454   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 17:45:23.506536   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0725 17:45:23.527417   23738 provision.go:87] duration metric: took 690.838248ms to configureAuth
	I0725 17:45:23.527441   23738 buildroot.go:189] setting minikube options for container-runtime
	I0725 17:45:23.527603   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:45:23.527680   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:23.530399   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.530720   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.530744   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.530854   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:23.531033   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.531219   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.531359   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:23.531495   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:23.531681   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:23.531702   23738 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 17:45:23.785163   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
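
	The %!s(MISSING) in the command above is Go's fmt diagnostic for a format verb with no matching argument, most likely introduced when the command string was built with a printf-style call that received one argument too few. A tiny illustration:

	    package main

	    import "fmt"

	    func main() {
	        // A %s verb with no matching argument: fmt renders the diagnostic
	        // %!s(MISSING) instead of failing, which is what shows up in the log.
	        cmd := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s")
	        fmt.Println(cmd) // sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)
	    }
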
	
	I0725 17:45:23.785187   23738 main.go:141] libmachine: Checking connection to Docker...
	I0725 17:45:23.785195   23738 main.go:141] libmachine: (ha-174036) Calling .GetURL
	I0725 17:45:23.786562   23738 main.go:141] libmachine: (ha-174036) DBG | Using libvirt version 6000000
	I0725 17:45:23.788791   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.789097   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.789120   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.789284   23738 main.go:141] libmachine: Docker is up and running!
	I0725 17:45:23.789313   23738 main.go:141] libmachine: Reticulating splines...
	I0725 17:45:23.789326   23738 client.go:171] duration metric: took 23.356804273s to LocalClient.Create
	I0725 17:45:23.789349   23738 start.go:167] duration metric: took 23.356870648s to libmachine.API.Create "ha-174036"
	I0725 17:45:23.789356   23738 start.go:293] postStartSetup for "ha-174036" (driver="kvm2")
	I0725 17:45:23.789369   23738 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:45:23.789386   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:23.789646   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:45:23.789668   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:23.791519   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.791858   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.791891   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.791993   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:23.792167   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.792336   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:23.792451   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:23.873796   23738 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:45:23.877724   23738 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 17:45:23.877743   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 17:45:23.877800   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 17:45:23.877864   23738 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 17:45:23.877874   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 17:45:23.877955   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:45:23.886561   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:45:23.909193   23738 start.go:296] duration metric: took 119.821515ms for postStartSetup
	I0725 17:45:23.909245   23738 main.go:141] libmachine: (ha-174036) Calling .GetConfigRaw
	I0725 17:45:23.909781   23738 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:45:23.912923   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.913305   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.913328   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.913546   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:45:23.913716   23738 start.go:128] duration metric: took 23.498242386s to createHost
	I0725 17:45:23.913735   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:23.915969   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.916280   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:23.916307   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:23.916468   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:23.916635   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.916846   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:23.916993   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:23.917139   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:45:23.917317   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:45:23.917331   23738 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 17:45:24.024959   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929524.002730715
	
	I0725 17:45:24.024988   23738 fix.go:216] guest clock: 1721929524.002730715
	I0725 17:45:24.024996   23738 fix.go:229] Guest: 2024-07-25 17:45:24.002730715 +0000 UTC Remote: 2024-07-25 17:45:23.913726357 +0000 UTC m=+23.597775412 (delta=89.004358ms)
	I0725 17:45:24.025016   23738 fix.go:200] guest clock delta is within tolerance: 89.004358ms
	I0725 17:45:24.025020   23738 start.go:83] releasing machines lock for "ha-174036", held for 23.609644733s
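The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift when the delta is small. A minimal Go sketch of that comparison follows; the `parseGuestClock` helper and the one-second tolerance are assumptions for illustration, not values taken from minikube's code.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts "date +%s.%N" output (e.g. "1721929524.002730715")
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1721929524.002730715")
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	// One second is an assumed tolerance; the log only reports the delta.
    	if delta <= time.Second {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
    	}
    }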
	I0725 17:45:24.025041   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:24.025281   23738 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:45:24.028425   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.028859   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:24.028888   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.029042   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:24.029518   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:24.029715   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:24.029828   23738 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 17:45:24.029880   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:24.029955   23738 ssh_runner.go:195] Run: cat /version.json
	I0725 17:45:24.029965   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:24.032752   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.032824   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.033140   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:24.033159   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.033175   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:24.033184   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:24.033287   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:24.033427   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:24.033483   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:24.033581   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:24.033641   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:24.033738   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:24.033792   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:24.033881   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:24.143291   23738 ssh_runner.go:195] Run: systemctl --version
	I0725 17:45:24.149234   23738 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 17:45:24.301651   23738 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 17:45:24.307405   23738 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 17:45:24.307462   23738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 17:45:24.322949   23738 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 17:45:24.322973   23738 start.go:495] detecting cgroup driver to use...
	I0725 17:45:24.323045   23738 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 17:45:24.339777   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 17:45:24.353592   23738 docker.go:217] disabling cri-docker service (if available) ...
	I0725 17:45:24.353673   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 17:45:24.366965   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 17:45:24.380148   23738 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 17:45:24.496094   23738 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 17:45:24.655280   23738 docker.go:233] disabling docker service ...
	I0725 17:45:24.655348   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 17:45:24.668516   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 17:45:24.680629   23738 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 17:45:24.788029   23738 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 17:45:24.895924   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 17:45:24.910408   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:45:24.927406   23738 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 17:45:24.927480   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:24.937032   23738 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 17:45:24.937128   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:24.946821   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:24.965352   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:24.976399   23738 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 17:45:24.987018   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:24.996636   23738 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:45:25.012084   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
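The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A hedged Go sketch of one such idempotent key rewrite is below; the file path, keys, and values are the ones shown in the log, but the `setTOMLKey` helper itself is assumed (and needs root plus an existing drop-in file to actually succeed).

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setTOMLKey replaces any existing "key = ..." line with "key = value",
    // mirroring the sed -i 's|^.*key = .*$|...|' calls in the log.
    func setTOMLKey(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	if err := setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    	if err := setTOMLKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }

Unlike the sed calls, this sketch does not add a key that is missing entirely; the log handles that case separately with the grep-or-sed fallback for default_sysctls.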
	I0725 17:45:25.021555   23738 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 17:45:25.030114   23738 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 17:45:25.030161   23738 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 17:45:25.041519   23738 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
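The fallback above is: when the bridge-nf-call sysctl path is missing, load br_netfilter and then force IPv4 forwarding on by writing directly to procfs. A minimal Go version of those two steps is sketched here; it needs root and a Linux host, so treat it as an illustration rather than something to run as-is.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Equivalent of `sudo modprobe br_netfilter`; the module may already be
    	// loaded or built in, so a failure here is only reported.
    	if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
    	}
    }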
	I0725 17:45:25.050245   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:45:25.156592   23738 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 17:45:25.283870   23738 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 17:45:25.283944   23738 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 17:45:25.288595   23738 start.go:563] Will wait 60s for crictl version
	I0725 17:45:25.288644   23738 ssh_runner.go:195] Run: which crictl
	I0725 17:45:25.291945   23738 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 17:45:25.328932   23738 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 17:45:25.329017   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:45:25.355748   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:45:25.382590   23738 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 17:45:25.383661   23738 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:45:25.386560   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:25.387040   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:25.387061   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:25.387309   23738 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 17:45:25.390885   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
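The /etc/hosts command above is an idempotent upsert: filter out any existing line that ends in a tab followed by host.minikube.internal, then append the fresh mapping and copy the result back. A small Go sketch of the same idea follows; it writes to a temporary path instead of /etc/hosts (the real file needs root), and the helper name is assumed.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost drops any line already ending in "\t<name>" and appends a fresh
    // "<ip>\t<name>" entry, matching the grep -v / echo pipeline in the log.
    func upsertHost(contents, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(contents, "\n") {
    		if line == "" || strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n"
    	updated := upsertHost(hosts, "192.168.39.1", "host.minikube.internal")
    	fmt.Print(updated)
    	_ = os.WriteFile("/tmp/hosts.example", []byte(updated), 0o644)
    }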
	I0725 17:45:25.402213   23738 kubeadm.go:883] updating cluster {Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 17:45:25.402319   23738 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:45:25.402376   23738 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:45:25.430493   23738 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 17:45:25.430560   23738 ssh_runner.go:195] Run: which lz4
	I0725 17:45:25.433912   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0725 17:45:25.434009   23738 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 17:45:25.437770   23738 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 17:45:25.437801   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 17:45:26.638853   23738 crio.go:462] duration metric: took 1.20486584s to copy over tarball
	I0725 17:45:26.638922   23738 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 17:45:28.699435   23738 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.060481012s)
	I0725 17:45:28.699463   23738 crio.go:469] duration metric: took 2.060587652s to extract the tarball
	I0725 17:45:28.699472   23738 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 17:45:28.736484   23738 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:45:28.780302   23738 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 17:45:28.780335   23738 cache_images.go:84] Images are preloaded, skipping loading
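The two `sudo crictl images --output json` runs above drive the preload decision: before the tarball is copied, the expected kube-apiserver image is missing, so the preload is extracted; afterwards all images are present and further loading is skipped. A rough Go sketch of that presence check is below; the JSON field names follow the usual crictl output shape and should be treated as assumptions.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // criImages mirrors the assumed shape of `crictl images --output json`:
    // {"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3", ...]}, ...]}.
    type criImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func hasImage(want string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var imgs criImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		return false, err
    	}
    	for _, img := range imgs.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
    	if err != nil {
    		fmt.Println("crictl not available here:", err)
    		return
    	}
    	fmt.Println("preloaded:", ok)
    }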
	I0725 17:45:28.780346   23738 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.30.3 crio true true} ...
	I0725 17:45:28.780469   23738 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 17:45:28.780550   23738 ssh_runner.go:195] Run: crio config
	I0725 17:45:28.824121   23738 cni.go:84] Creating CNI manager for ""
	I0725 17:45:28.824139   23738 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0725 17:45:28.824147   23738 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 17:45:28.824172   23738 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174036 NodeName:ha-174036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 17:45:28.824301   23738 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174036"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 17:45:28.824343   23738 kube-vip.go:115] generating kube-vip config ...
	I0725 17:45:28.824398   23738 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 17:45:28.840839   23738 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 17:45:28.840978   23738 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
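The manifest above is the kube-vip static pod that gets written to /etc/kubernetes/manifests/kube-vip.yaml a few lines later; kube-vip.go fills in the VIP address, port, and interface before writing it. The full template is much longer, so the Go sketch below is only a trimmed stand-in that shows the parameterisation; the struct fields and the shortened template are assumptions.

    package main

    import (
    	"os"
    	"text/template"
    )

    // A heavily trimmed stand-in for the kube-vip static-pod template; only the
    // values that vary in the log (VIP address, port, interface) are parameters.
    const kubeVipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - name: address
          value: "{{ .VIP }}"
        - name: port
          value: "{{ .Port }}"
        - name: vip_interface
          value: "{{ .Interface }}"
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
    	_ = t.Execute(os.Stdout, struct {
    		VIP, Port, Interface string
    	}{VIP: "192.168.39.254", Port: "8443", Interface: "eth0"})
    }

Because it lives under staticPodPath, the kubelet starts this pod directly, so the control-plane VIP is available before the API server itself is reachable.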
	I0725 17:45:28.841037   23738 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 17:45:28.849797   23738 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 17:45:28.849865   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0725 17:45:28.858373   23738 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0725 17:45:28.873487   23738 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:45:28.888285   23738 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0725 17:45:28.903747   23738 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0725 17:45:28.918947   23738 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0725 17:45:28.922518   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:45:28.933430   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:45:29.060403   23738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:45:29.076772   23738 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036 for IP: 192.168.39.165
	I0725 17:45:29.076800   23738 certs.go:194] generating shared ca certs ...
	I0725 17:45:29.076821   23738 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.076985   23738 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 17:45:29.077052   23738 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 17:45:29.077071   23738 certs.go:256] generating profile certs ...
	I0725 17:45:29.077134   23738 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key
	I0725 17:45:29.077151   23738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt with IP's: []
	I0725 17:45:29.192850   23738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt ...
	I0725 17:45:29.192880   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt: {Name:mkebf1ec254fc7ad5e59237cbac795cf47e3706f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.193079   23738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key ...
	I0725 17:45:29.193094   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key: {Name:mk41a12cac673f5052e7c617cf0b303b5f70f17a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.193203   23738 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.2d6ed432
	I0725 17:45:29.193221   23738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.2d6ed432 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.254]
	I0725 17:45:29.327832   23738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.2d6ed432 ...
	I0725 17:45:29.327865   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.2d6ed432: {Name:mkfb038ba87f0fe0746474375f2c8aa6b3f3cca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.328059   23738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.2d6ed432 ...
	I0725 17:45:29.328077   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.2d6ed432: {Name:mke1eb949d35e1cf45eda64ae6d4d6e75f910032 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.328179   23738 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.2d6ed432 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt
	I0725 17:45:29.328299   23738 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.2d6ed432 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key
	I0725 17:45:29.328399   23738 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key
	I0725 17:45:29.328418   23738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt with IP's: []
	I0725 17:45:29.567193   23738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt ...
	I0725 17:45:29.567221   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt: {Name:mk147b1179eba45024fd1136e15e3d75cb08a351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:29.567388   23738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key ...
	I0725 17:45:29.567398   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key: {Name:mk5fb29b93e9d87cb88e595d391cd56d14f313ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
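The profile certs above are issued against the shared minikubeCA; the interesting one is the apiserver cert, which carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.254] so the server is valid for the service VIP, localhost, the node IP, and the HA VIP. A self-contained Go sketch of issuing a CA-signed cert with those IP SANs follows; the throwaway CA, key size, and validity period are assumptions, not the values minikube uses.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newSignedCert issues a serving certificate with the given IP SANs, signed
    // by the provided CA, which is the shape of the apiserver cert in the log.
    func newSignedCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
    	// A throwaway self-signed CA stands in for minikubeCA here.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	ips := []net.IP{
    		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		net.ParseIP("192.168.39.165"), net.ParseIP("192.168.39.254"),
    	}
    	certPEM, err := newSignedCert(caCert, caKey, ips)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued %d-byte PEM cert with %d IP SANs\n", len(certPEM), len(ips))
    }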
	I0725 17:45:29.567464   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 17:45:29.567480   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 17:45:29.567490   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 17:45:29.567502   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 17:45:29.567513   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 17:45:29.567523   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 17:45:29.567535   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 17:45:29.567546   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 17:45:29.567597   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 17:45:29.567630   23738 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 17:45:29.567639   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:45:29.567662   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 17:45:29.567683   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:45:29.567703   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 17:45:29.567737   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:45:29.567761   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:45:29.567774   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 17:45:29.567786   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 17:45:29.568301   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:45:29.592957   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 17:45:29.616055   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:45:29.639081   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 17:45:29.660472   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0725 17:45:29.681933   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 17:45:29.704380   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:45:29.726374   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:45:29.749140   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:45:29.770909   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 17:45:29.792848   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 17:45:29.814908   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 17:45:29.830920   23738 ssh_runner.go:195] Run: openssl version
	I0725 17:45:29.836622   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:45:29.849433   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:45:29.853681   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:45:29.853730   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:45:29.861470   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:45:29.873995   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 17:45:29.885073   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 17:45:29.889976   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 17:45:29.890033   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 17:45:29.895771   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 17:45:29.907636   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 17:45:29.919890   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 17:45:29.925295   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 17:45:29.925357   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 17:45:29.930828   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
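The openssl/ln pairs above install each extra cert into the system trust store by symlinking it under its OpenSSL subject hash (for example minikubeCA.pem becomes /etc/ssl/certs/b5213941.0). A small Go sketch that shells out to openssl for the hash is below; the helper name is assumed, and the final call is expected to fail outside the VM since it needs the cert file and write access to /etc/ssl/certs.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCert symlinks certPath into dir under "<openssl subject hash>.0",
    // which is how /etc/ssl/certs lookups locate a trusted CA.
    func installCert(certPath, dir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(dir, hash+".0")
    	_ = os.Remove(link) // mirror ln -fs: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, "sketch only; expected to fail without the files and permissions:", err)
    	}
    }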
	I0725 17:45:29.940716   23738 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 17:45:29.944407   23738 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 17:45:29.944462   23738 kubeadm.go:392] StartCluster: {Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:45:29.944536   23738 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 17:45:29.944593   23738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 17:45:29.982225   23738 cri.go:89] found id: ""
	I0725 17:45:29.982290   23738 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 17:45:29.991416   23738 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 17:45:30.000464   23738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 17:45:30.009052   23738 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 17:45:30.009069   23738 kubeadm.go:157] found existing configuration files:
	
	I0725 17:45:30.009110   23738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 17:45:30.017488   23738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 17:45:30.017623   23738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 17:45:30.027429   23738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 17:45:30.036309   23738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 17:45:30.036434   23738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 17:45:30.045244   23738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 17:45:30.053578   23738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 17:45:30.053629   23738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 17:45:30.062119   23738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 17:45:30.069972   23738 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 17:45:30.070019   23738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 17:45:30.078925   23738 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 17:45:30.298077   23738 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 17:45:41.465242   23738 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 17:45:41.465293   23738 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 17:45:41.465379   23738 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 17:45:41.465488   23738 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 17:45:41.465581   23738 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0725 17:45:41.465658   23738 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 17:45:41.467196   23738 out.go:204]   - Generating certificates and keys ...
	I0725 17:45:41.467267   23738 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 17:45:41.467419   23738 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 17:45:41.467497   23738 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 17:45:41.467571   23738 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 17:45:41.467657   23738 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 17:45:41.467725   23738 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 17:45:41.467800   23738 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 17:45:41.467915   23738 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-174036 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0725 17:45:41.467989   23738 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 17:45:41.468140   23738 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-174036 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0725 17:45:41.468223   23738 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 17:45:41.468278   23738 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 17:45:41.468339   23738 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 17:45:41.468390   23738 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 17:45:41.468432   23738 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 17:45:41.468480   23738 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 17:45:41.468544   23738 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 17:45:41.468611   23738 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 17:45:41.468663   23738 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 17:45:41.468753   23738 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 17:45:41.468816   23738 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 17:45:41.470362   23738 out.go:204]   - Booting up control plane ...
	I0725 17:45:41.470443   23738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 17:45:41.470514   23738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 17:45:41.470570   23738 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 17:45:41.470684   23738 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 17:45:41.470845   23738 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 17:45:41.470916   23738 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 17:45:41.471061   23738 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 17:45:41.471154   23738 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 17:45:41.471208   23738 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001128185s
	I0725 17:45:41.471326   23738 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 17:45:41.471387   23738 kubeadm.go:310] [api-check] The API server is healthy after 5.774209816s
	I0725 17:45:41.471478   23738 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 17:45:41.471597   23738 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 17:45:41.471692   23738 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 17:45:41.471859   23738 kubeadm.go:310] [mark-control-plane] Marking the node ha-174036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 17:45:41.471909   23738 kubeadm.go:310] [bootstrap-token] Using token: xq8hdz.24cgx0m1lq14udqx
	I0725 17:45:41.473116   23738 out.go:204]   - Configuring RBAC rules ...
	I0725 17:45:41.473203   23738 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 17:45:41.473332   23738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 17:45:41.473462   23738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 17:45:41.473641   23738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 17:45:41.473820   23738 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 17:45:41.473896   23738 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 17:45:41.474004   23738 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 17:45:41.474044   23738 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 17:45:41.474098   23738 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 17:45:41.474109   23738 kubeadm.go:310] 
	I0725 17:45:41.474191   23738 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 17:45:41.474200   23738 kubeadm.go:310] 
	I0725 17:45:41.474276   23738 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 17:45:41.474283   23738 kubeadm.go:310] 
	I0725 17:45:41.474317   23738 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 17:45:41.474373   23738 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 17:45:41.474419   23738 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 17:45:41.474428   23738 kubeadm.go:310] 
	I0725 17:45:41.474475   23738 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 17:45:41.474481   23738 kubeadm.go:310] 
	I0725 17:45:41.474523   23738 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 17:45:41.474529   23738 kubeadm.go:310] 
	I0725 17:45:41.474570   23738 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 17:45:41.474635   23738 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 17:45:41.474709   23738 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 17:45:41.474718   23738 kubeadm.go:310] 
	I0725 17:45:41.474816   23738 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 17:45:41.474914   23738 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 17:45:41.474922   23738 kubeadm.go:310] 
	I0725 17:45:41.474984   23738 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xq8hdz.24cgx0m1lq14udqx \
	I0725 17:45:41.475065   23738 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 \
	I0725 17:45:41.475086   23738 kubeadm.go:310] 	--control-plane 
	I0725 17:45:41.475092   23738 kubeadm.go:310] 
	I0725 17:45:41.475178   23738 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 17:45:41.475185   23738 kubeadm.go:310] 
	I0725 17:45:41.475270   23738 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xq8hdz.24cgx0m1lq14udqx \
	I0725 17:45:41.475402   23738 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 
	I0725 17:45:41.475421   23738 cni.go:84] Creating CNI manager for ""
	I0725 17:45:41.475429   23738 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0725 17:45:41.477532   23738 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0725 17:45:41.478593   23738 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0725 17:45:41.484967   23738 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0725 17:45:41.484986   23738 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0725 17:45:41.505960   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
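
The lines above show minikube copying the generated CNI manifest onto the node and applying it with the bundled kubectl against the node-local kubeconfig. A minimal sketch of that scp-then-apply pattern, assuming a hypothetical Runner interface rather than minikube's actual ssh_runner API:

package clustersketch

import "fmt"

// Runner abstracts "copy these bytes to this path on the node" and
// "run this shell command on the node"; it is a hypothetical stand-in
// for minikube's command runner, not its real API.
type Runner interface {
	Copy(dst string, data []byte) error
	Run(cmd string) (string, error)
}

// applyManifest mirrors the scp + "kubectl apply" steps logged above:
// write the manifest onto the node, then apply it with the bundled
// kubectl against the node-local kubeconfig.
func applyManifest(r Runner, kubectl, kubeconfig, dst string, manifest []byte) error {
	if err := r.Copy(dst, manifest); err != nil {
		return fmt.Errorf("copy manifest to %s: %w", dst, err)
	}
	cmd := fmt.Sprintf("sudo %s apply --kubeconfig=%s -f %s", kubectl, kubeconfig, dst)
	if out, err := r.Run(cmd); err != nil {
		return fmt.Errorf("apply manifest: %v (output: %s)", err, out)
	}
	return nil
}
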
	I0725 17:45:41.830998   23738 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 17:45:41.831050   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:41.831080   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174036 minikube.k8s.io/updated_at=2024_07_25T17_45_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=ha-174036 minikube.k8s.io/primary=true
	I0725 17:45:41.851557   23738 ops.go:34] apiserver oom_adj: -16
	I0725 17:45:42.051947   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:42.552034   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:43.052204   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:43.552098   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:44.052678   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:44.552101   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:45.051992   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:45.552109   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:46.052037   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:46.552681   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:47.052217   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:47.552608   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:48.052118   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:48.551977   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:49.052647   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:49.552945   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:50.052583   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:50.552590   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:51.052051   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:51.552107   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:52.052883   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:52.552597   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:53.052284   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:53.552703   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:54.052355   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 17:45:54.174917   23738 kubeadm.go:1113] duration metric: took 12.343915886s to wait for elevateKubeSystemPrivileges
	I0725 17:45:54.174954   23738 kubeadm.go:394] duration metric: took 24.230496074s to StartCluster
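
The repeated "kubectl get sa default" calls above are a fixed-interval poll: minikube waits for the default service account to exist before granting kube-system privileges, and the log records the total wait as the elevateKubeSystemPrivileges duration metric. A rough sketch of that wait loop, with a hypothetical runner interface and timeout (the real logic lives in minikube's kubeadm package):

package sasketch

import (
	"fmt"
	"time"
)

// runner is a hypothetical stand-in for minikube's ssh_runner.
type runner interface {
	Run(cmd string) (string, error)
}

// waitForDefaultSA polls "kubectl get sa default" roughly every 500ms until
// the default service account exists or the timeout expires — the same
// cadence visible in the log above.
func waitForDefaultSA(r runner, kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := fmt.Sprintf("sudo %s get sa default --kubeconfig=%s", kubectl, kubeconfig)
		if _, err := r.Run(cmd); err == nil {
			return nil // default service account is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for default service account", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
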
	I0725 17:45:54.174977   23738 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:54.175040   23738 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:45:54.175696   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:45:54.175879   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 17:45:54.175895   23738 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 17:45:54.175871   23738 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:45:54.175965   23738 addons.go:69] Setting default-storageclass=true in profile "ha-174036"
	I0725 17:45:54.175974   23738 start.go:241] waiting for startup goroutines ...
	I0725 17:45:54.175959   23738 addons.go:69] Setting storage-provisioner=true in profile "ha-174036"
	I0725 17:45:54.175989   23738 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-174036"
	I0725 17:45:54.176007   23738 addons.go:234] Setting addon storage-provisioner=true in "ha-174036"
	I0725 17:45:54.176045   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:45:54.176079   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:45:54.176400   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:54.176421   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:54.176432   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:54.176436   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:54.191504   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0725 17:45:54.191727   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0725 17:45:54.191938   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:54.192033   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:54.192459   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:54.192483   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:54.192590   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:54.192612   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:54.192864   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:54.192969   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:54.193146   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:45:54.193385   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:54.193414   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:54.195361   23738 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:45:54.195619   23738 kapi.go:59] client config for ha-174036: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 17:45:54.196046   23738 cert_rotation.go:137] Starting client certificate rotation controller
	I0725 17:45:54.196183   23738 addons.go:234] Setting addon default-storageclass=true in "ha-174036"
	I0725 17:45:54.196220   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:45:54.196511   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:54.196536   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:54.209293   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I0725 17:45:54.209809   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:54.210326   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:54.210350   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:54.210787   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:54.211030   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:45:54.211088   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33643
	I0725 17:45:54.211466   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:54.211847   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:54.211870   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:54.212266   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:54.212733   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:54.212783   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:54.213029   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:54.215301   23738 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 17:45:54.216768   23738 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:45:54.216784   23738 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 17:45:54.216797   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:54.219959   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:54.220356   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:54.220383   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:54.220561   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:54.220740   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:54.220905   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:54.221059   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:54.227793   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I0725 17:45:54.228108   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:54.228544   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:54.228561   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:54.228827   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:54.229009   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:45:54.230303   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:45:54.230484   23738 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 17:45:54.230501   23738 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 17:45:54.230515   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:45:54.233106   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:54.233499   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:45:54.233532   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:45:54.233692   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:45:54.233854   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:45:54.233995   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:45:54.234118   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:45:54.354183   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 17:45:54.367225   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 17:45:54.369226   23738 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 17:45:54.685724   23738 main.go:141] libmachine: Making call to close driver server
	I0725 17:45:54.685745   23738 main.go:141] libmachine: (ha-174036) Calling .Close
	I0725 17:45:54.686003   23738 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:45:54.686018   23738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:45:54.686028   23738 main.go:141] libmachine: Making call to close driver server
	I0725 17:45:54.686035   23738 main.go:141] libmachine: (ha-174036) Calling .Close
	I0725 17:45:54.686267   23738 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:45:54.686281   23738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:45:54.686392   23738 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0725 17:45:54.686403   23738 round_trippers.go:469] Request Headers:
	I0725 17:45:54.686413   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:45:54.686418   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:45:54.706035   23738 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0725 17:45:54.706509   23738 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0725 17:45:54.706522   23738 round_trippers.go:469] Request Headers:
	I0725 17:45:54.706529   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:45:54.706536   23738 round_trippers.go:473]     Content-Type: application/json
	I0725 17:45:54.706539   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:45:54.717026   23738 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0725 17:45:54.717173   23738 main.go:141] libmachine: Making call to close driver server
	I0725 17:45:54.717187   23738 main.go:141] libmachine: (ha-174036) Calling .Close
	I0725 17:45:54.717481   23738 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:45:54.717499   23738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:45:54.942115   23738 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0725 17:45:55.149587   23738 main.go:141] libmachine: Making call to close driver server
	I0725 17:45:55.149607   23738 main.go:141] libmachine: (ha-174036) Calling .Close
	I0725 17:45:55.149910   23738 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:45:55.149925   23738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:45:55.149934   23738 main.go:141] libmachine: Making call to close driver server
	I0725 17:45:55.149942   23738 main.go:141] libmachine: (ha-174036) Calling .Close
	I0725 17:45:55.150235   23738 main.go:141] libmachine: (ha-174036) DBG | Closing plugin on server side
	I0725 17:45:55.150278   23738 main.go:141] libmachine: Successfully made call to close driver server
	I0725 17:45:55.150294   23738 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 17:45:55.152016   23738 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0725 17:45:55.153312   23738 addons.go:510] duration metric: took 977.418617ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0725 17:45:55.153351   23738 start.go:246] waiting for cluster config update ...
	I0725 17:45:55.153365   23738 start.go:255] writing updated cluster config ...
	I0725 17:45:55.155344   23738 out.go:177] 
	I0725 17:45:55.157105   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:45:55.157226   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
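
The "writing updated cluster config" step above persists the profile to config.json under the .minikube/profiles directory before the next node is provisioned. A minimal sketch of that kind of save, assuming an illustrative Profile struct rather than minikube's real config types:

package profilesketch

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// Profile is an illustrative stand-in for the profile config that minikube
// serializes; the real struct lives in minikube's config package.
type Profile struct {
	Name              string `json:"Name"`
	Driver            string `json:"Driver"`
	KubernetesVersion string `json:"KubernetesVersion"`
	ContainerRuntime  string `json:"ContainerRuntime"`
}

// saveProfile writes the profile as indented JSON to
// <miniHome>/profiles/<name>/config.json, creating the directory if needed.
func saveProfile(miniHome string, p Profile) error {
	dir := filepath.Join(miniHome, "profiles", p.Name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(p, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
}
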
	I0725 17:45:55.158931   23738 out.go:177] * Starting "ha-174036-m02" control-plane node in "ha-174036" cluster
	I0725 17:45:55.160055   23738 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:45:55.160080   23738 cache.go:56] Caching tarball of preloaded images
	I0725 17:45:55.160161   23738 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 17:45:55.160175   23738 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 17:45:55.160244   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:45:55.160438   23738 start.go:360] acquireMachinesLock for ha-174036-m02: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 17:45:55.160485   23738 start.go:364] duration metric: took 27.238µs to acquireMachinesLock for "ha-174036-m02"
	I0725 17:45:55.160500   23738 start.go:93] Provisioning new machine with config: &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:45:55.160569   23738 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0725 17:45:55.162033   23738 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 17:45:55.162112   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:45:55.162135   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:45:55.178063   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0725 17:45:55.178487   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:45:55.178904   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:45:55.178922   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:45:55.179234   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:45:55.179434   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetMachineName
	I0725 17:45:55.179626   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:45:55.179861   23738 start.go:159] libmachine.API.Create for "ha-174036" (driver="kvm2")
	I0725 17:45:55.179884   23738 client.go:168] LocalClient.Create starting
	I0725 17:45:55.179923   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 17:45:55.179959   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:45:55.179976   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:45:55.180041   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 17:45:55.180063   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:45:55.180079   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:45:55.180106   23738 main.go:141] libmachine: Running pre-create checks...
	I0725 17:45:55.180118   23738 main.go:141] libmachine: (ha-174036-m02) Calling .PreCreateCheck
	I0725 17:45:55.180360   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetConfigRaw
	I0725 17:45:55.180759   23738 main.go:141] libmachine: Creating machine...
	I0725 17:45:55.180773   23738 main.go:141] libmachine: (ha-174036-m02) Calling .Create
	I0725 17:45:55.180930   23738 main.go:141] libmachine: (ha-174036-m02) Creating KVM machine...
	I0725 17:45:55.182197   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found existing default KVM network
	I0725 17:45:55.182309   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found existing private KVM network mk-ha-174036
	I0725 17:45:55.182439   23738 main.go:141] libmachine: (ha-174036-m02) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02 ...
	I0725 17:45:55.182465   23738 main.go:141] libmachine: (ha-174036-m02) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 17:45:55.182515   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:55.182426   24141 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:45:55.182612   23738 main.go:141] libmachine: (ha-174036-m02) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 17:45:55.426913   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:55.426797   24141 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa...
	I0725 17:45:55.616429   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:55.616299   24141 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/ha-174036-m02.rawdisk...
	I0725 17:45:55.616456   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Writing magic tar header
	I0725 17:45:55.616467   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Writing SSH key tar header
	I0725 17:45:55.616479   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:55.616441   24141 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02 ...
	I0725 17:45:55.616610   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02
	I0725 17:45:55.616641   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02 (perms=drwx------)
	I0725 17:45:55.616651   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 17:45:55.616666   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:45:55.616676   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 17:45:55.616687   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 17:45:55.616698   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home/jenkins
	I0725 17:45:55.616709   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Checking permissions on dir: /home
	I0725 17:45:55.616721   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Skipping /home - not owner
	I0725 17:45:55.616734   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 17:45:55.616747   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 17:45:55.616767   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 17:45:55.616784   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 17:45:55.616794   23738 main.go:141] libmachine: (ha-174036-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 17:45:55.616802   23738 main.go:141] libmachine: (ha-174036-m02) Creating domain...
	I0725 17:45:55.617606   23738 main.go:141] libmachine: (ha-174036-m02) define libvirt domain using xml: 
	I0725 17:45:55.617630   23738 main.go:141] libmachine: (ha-174036-m02) <domain type='kvm'>
	I0725 17:45:55.617641   23738 main.go:141] libmachine: (ha-174036-m02)   <name>ha-174036-m02</name>
	I0725 17:45:55.617652   23738 main.go:141] libmachine: (ha-174036-m02)   <memory unit='MiB'>2200</memory>
	I0725 17:45:55.617661   23738 main.go:141] libmachine: (ha-174036-m02)   <vcpu>2</vcpu>
	I0725 17:45:55.617668   23738 main.go:141] libmachine: (ha-174036-m02)   <features>
	I0725 17:45:55.617678   23738 main.go:141] libmachine: (ha-174036-m02)     <acpi/>
	I0725 17:45:55.617685   23738 main.go:141] libmachine: (ha-174036-m02)     <apic/>
	I0725 17:45:55.617697   23738 main.go:141] libmachine: (ha-174036-m02)     <pae/>
	I0725 17:45:55.617705   23738 main.go:141] libmachine: (ha-174036-m02)     
	I0725 17:45:55.617714   23738 main.go:141] libmachine: (ha-174036-m02)   </features>
	I0725 17:45:55.617720   23738 main.go:141] libmachine: (ha-174036-m02)   <cpu mode='host-passthrough'>
	I0725 17:45:55.617739   23738 main.go:141] libmachine: (ha-174036-m02)   
	I0725 17:45:55.617750   23738 main.go:141] libmachine: (ha-174036-m02)   </cpu>
	I0725 17:45:55.617756   23738 main.go:141] libmachine: (ha-174036-m02)   <os>
	I0725 17:45:55.617764   23738 main.go:141] libmachine: (ha-174036-m02)     <type>hvm</type>
	I0725 17:45:55.617776   23738 main.go:141] libmachine: (ha-174036-m02)     <boot dev='cdrom'/>
	I0725 17:45:55.617782   23738 main.go:141] libmachine: (ha-174036-m02)     <boot dev='hd'/>
	I0725 17:45:55.617788   23738 main.go:141] libmachine: (ha-174036-m02)     <bootmenu enable='no'/>
	I0725 17:45:55.617795   23738 main.go:141] libmachine: (ha-174036-m02)   </os>
	I0725 17:45:55.617800   23738 main.go:141] libmachine: (ha-174036-m02)   <devices>
	I0725 17:45:55.617807   23738 main.go:141] libmachine: (ha-174036-m02)     <disk type='file' device='cdrom'>
	I0725 17:45:55.617816   23738 main.go:141] libmachine: (ha-174036-m02)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/boot2docker.iso'/>
	I0725 17:45:55.617826   23738 main.go:141] libmachine: (ha-174036-m02)       <target dev='hdc' bus='scsi'/>
	I0725 17:45:55.617831   23738 main.go:141] libmachine: (ha-174036-m02)       <readonly/>
	I0725 17:45:55.617838   23738 main.go:141] libmachine: (ha-174036-m02)     </disk>
	I0725 17:45:55.617844   23738 main.go:141] libmachine: (ha-174036-m02)     <disk type='file' device='disk'>
	I0725 17:45:55.617853   23738 main.go:141] libmachine: (ha-174036-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 17:45:55.617866   23738 main.go:141] libmachine: (ha-174036-m02)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/ha-174036-m02.rawdisk'/>
	I0725 17:45:55.617874   23738 main.go:141] libmachine: (ha-174036-m02)       <target dev='hda' bus='virtio'/>
	I0725 17:45:55.617880   23738 main.go:141] libmachine: (ha-174036-m02)     </disk>
	I0725 17:45:55.617887   23738 main.go:141] libmachine: (ha-174036-m02)     <interface type='network'>
	I0725 17:45:55.617904   23738 main.go:141] libmachine: (ha-174036-m02)       <source network='mk-ha-174036'/>
	I0725 17:45:55.617923   23738 main.go:141] libmachine: (ha-174036-m02)       <model type='virtio'/>
	I0725 17:45:55.617936   23738 main.go:141] libmachine: (ha-174036-m02)     </interface>
	I0725 17:45:55.617945   23738 main.go:141] libmachine: (ha-174036-m02)     <interface type='network'>
	I0725 17:45:55.617951   23738 main.go:141] libmachine: (ha-174036-m02)       <source network='default'/>
	I0725 17:45:55.617959   23738 main.go:141] libmachine: (ha-174036-m02)       <model type='virtio'/>
	I0725 17:45:55.617964   23738 main.go:141] libmachine: (ha-174036-m02)     </interface>
	I0725 17:45:55.617971   23738 main.go:141] libmachine: (ha-174036-m02)     <serial type='pty'>
	I0725 17:45:55.617978   23738 main.go:141] libmachine: (ha-174036-m02)       <target port='0'/>
	I0725 17:45:55.617987   23738 main.go:141] libmachine: (ha-174036-m02)     </serial>
	I0725 17:45:55.618006   23738 main.go:141] libmachine: (ha-174036-m02)     <console type='pty'>
	I0725 17:45:55.618022   23738 main.go:141] libmachine: (ha-174036-m02)       <target type='serial' port='0'/>
	I0725 17:45:55.618033   23738 main.go:141] libmachine: (ha-174036-m02)     </console>
	I0725 17:45:55.618040   23738 main.go:141] libmachine: (ha-174036-m02)     <rng model='virtio'>
	I0725 17:45:55.618053   23738 main.go:141] libmachine: (ha-174036-m02)       <backend model='random'>/dev/random</backend>
	I0725 17:45:55.618061   23738 main.go:141] libmachine: (ha-174036-m02)     </rng>
	I0725 17:45:55.618067   23738 main.go:141] libmachine: (ha-174036-m02)     
	I0725 17:45:55.618076   23738 main.go:141] libmachine: (ha-174036-m02)     
	I0725 17:45:55.618081   23738 main.go:141] libmachine: (ha-174036-m02)   </devices>
	I0725 17:45:55.618088   23738 main.go:141] libmachine: (ha-174036-m02) </domain>
	I0725 17:45:55.618107   23738 main.go:141] libmachine: (ha-174036-m02) 
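
The domain definition above is the XML the kvm2 driver hands to libvirt before creating the m02 VM. A hedged sketch of how such XML can be produced from a Go text/template; the struct fields, template body, and paths here are illustrative, not the driver's actual config:

package main

import (
	"os"
	"text/template"
)

// domainParams holds only the handful of fields needed for this sketch; the
// kvm2 driver's real config carries many more.
type domainParams struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISOPath   string
	DiskPath  string
	Network   string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Values mirror the domain logged above; paths are placeholders.
	_ = t.Execute(os.Stdout, domainParams{
		Name:      "ha-174036-m02",
		MemoryMiB: 2200,
		VCPUs:     2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/ha-174036-m02.rawdisk",
		Network:   "mk-ha-174036",
	})
}
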
	I0725 17:45:55.624823   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:4a:ce:b8 in network default
	I0725 17:45:55.625389   23738 main.go:141] libmachine: (ha-174036-m02) Ensuring networks are active...
	I0725 17:45:55.625409   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:55.626160   23738 main.go:141] libmachine: (ha-174036-m02) Ensuring network default is active
	I0725 17:45:55.626581   23738 main.go:141] libmachine: (ha-174036-m02) Ensuring network mk-ha-174036 is active
	I0725 17:45:55.626937   23738 main.go:141] libmachine: (ha-174036-m02) Getting domain xml...
	I0725 17:45:55.627612   23738 main.go:141] libmachine: (ha-174036-m02) Creating domain...
	I0725 17:45:56.833602   23738 main.go:141] libmachine: (ha-174036-m02) Waiting to get IP...
	I0725 17:45:56.834339   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:56.834770   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:56.834797   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:56.834744   24141 retry.go:31] will retry after 234.358388ms: waiting for machine to come up
	I0725 17:45:57.071228   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:57.071666   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:57.071728   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:57.071637   24141 retry.go:31] will retry after 238.148169ms: waiting for machine to come up
	I0725 17:45:57.311048   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:57.311519   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:57.311545   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:57.311472   24141 retry.go:31] will retry after 312.220932ms: waiting for machine to come up
	I0725 17:45:57.624808   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:57.625230   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:57.625256   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:57.625189   24141 retry.go:31] will retry after 519.906509ms: waiting for machine to come up
	I0725 17:45:58.146508   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:58.146952   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:58.146978   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:58.146918   24141 retry.go:31] will retry after 486.541786ms: waiting for machine to come up
	I0725 17:45:58.634623   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:58.635069   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:58.635101   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:58.635014   24141 retry.go:31] will retry after 628.549445ms: waiting for machine to come up
	I0725 17:45:59.265330   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:45:59.265799   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:45:59.265824   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:45:59.265762   24141 retry.go:31] will retry after 770.991951ms: waiting for machine to come up
	I0725 17:46:00.038570   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:00.038986   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:00.039023   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:00.038936   24141 retry.go:31] will retry after 901.347868ms: waiting for machine to come up
	I0725 17:46:00.941394   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:00.941889   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:00.941911   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:00.941846   24141 retry.go:31] will retry after 1.713993666s: waiting for machine to come up
	I0725 17:46:02.657596   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:02.657983   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:02.658001   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:02.657942   24141 retry.go:31] will retry after 1.578532576s: waiting for machine to come up
	I0725 17:46:04.238727   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:04.239149   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:04.239181   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:04.239088   24141 retry.go:31] will retry after 2.686856273s: waiting for machine to come up
	I0725 17:46:06.928339   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:06.928828   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:06.928853   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:06.928780   24141 retry.go:31] will retry after 3.150698622s: waiting for machine to come up
	I0725 17:46:10.082964   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:10.083347   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find current IP address of domain ha-174036-m02 in network mk-ha-174036
	I0725 17:46:10.083370   23738 main.go:141] libmachine: (ha-174036-m02) DBG | I0725 17:46:10.083303   24141 retry.go:31] will retry after 4.376886346s: waiting for machine to come up
	I0725 17:46:14.461253   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.461676   23738 main.go:141] libmachine: (ha-174036-m02) Found IP for machine: 192.168.39.197
	I0725 17:46:14.461708   23738 main.go:141] libmachine: (ha-174036-m02) Reserving static IP address...
	I0725 17:46:14.461723   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has current primary IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.462099   23738 main.go:141] libmachine: (ha-174036-m02) DBG | unable to find host DHCP lease matching {name: "ha-174036-m02", mac: "52:54:00:75:a1:05", ip: "192.168.39.197"} in network mk-ha-174036
	I0725 17:46:14.534623   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Getting to WaitForSSH function...
	I0725 17:46:14.534653   23738 main.go:141] libmachine: (ha-174036-m02) Reserved static IP address: 192.168.39.197
	I0725 17:46:14.534667   23738 main.go:141] libmachine: (ha-174036-m02) Waiting for SSH to be available...
	I0725 17:46:14.537445   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.537846   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:minikube Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:14.537886   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.537940   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Using SSH client type: external
	I0725 17:46:14.538017   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa (-rw-------)
	I0725 17:46:14.538053   23738 main.go:141] libmachine: (ha-174036-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.197 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 17:46:14.538071   23738 main.go:141] libmachine: (ha-174036-m02) DBG | About to run SSH command:
	I0725 17:46:14.538085   23738 main.go:141] libmachine: (ha-174036-m02) DBG | exit 0
	I0725 17:46:14.660284   23738 main.go:141] libmachine: (ha-174036-m02) DBG | SSH cmd err, output: <nil>: 
	I0725 17:46:14.660574   23738 main.go:141] libmachine: (ha-174036-m02) KVM machine creation complete!
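
The "will retry after …: waiting for machine to come up" lines above come from a retry helper that waits with a growing, jittered delay until the new VM reports an IP and answers SSH. A simplified sketch of that retry-with-backoff shape; the initial delay, growth factor, and jitter are illustrative, not minikube's actual tuning:

package retrysketch

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn until it succeeds or the deadline passes,
// sleeping for a growing, jittered interval between attempts — the same
// pattern visible in the DHCP-lease wait logged above.
func retryUntil(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		if err := fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		// add up to 50% jitter, then grow the base delay for the next round
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}
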
	I0725 17:46:14.660853   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetConfigRaw
	I0725 17:46:14.661411   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:14.661599   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:14.661789   23738 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 17:46:14.661811   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 17:46:14.663133   23738 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 17:46:14.663147   23738 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 17:46:14.663153   23738 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 17:46:14.663159   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:14.665750   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.666199   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:14.666223   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.666369   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:14.666564   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.666722   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.666860   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:14.667015   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:14.667200   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:14.667211   23738 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 17:46:14.771419   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:46:14.771452   23738 main.go:141] libmachine: Detecting the provisioner...
	I0725 17:46:14.771464   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:14.774340   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.774722   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:14.774745   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.774908   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:14.775102   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.775329   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.775482   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:14.775653   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:14.775849   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:14.775859   23738 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 17:46:14.880994   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 17:46:14.881057   23738 main.go:141] libmachine: found compatible host: buildroot
	I0725 17:46:14.881064   23738 main.go:141] libmachine: Provisioning with buildroot...
	I0725 17:46:14.881071   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetMachineName
	I0725 17:46:14.881308   23738 buildroot.go:166] provisioning hostname "ha-174036-m02"
	I0725 17:46:14.881339   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetMachineName
	I0725 17:46:14.881508   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:14.884038   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.884377   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:14.884403   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:14.884527   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:14.884695   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.884883   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:14.885101   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:14.885297   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:14.885450   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:14.885462   23738 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174036-m02 && echo "ha-174036-m02" | sudo tee /etc/hostname
	I0725 17:46:15.012269   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174036-m02
	
	I0725 17:46:15.012289   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.015465   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.015835   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.015865   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.016043   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:15.016222   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.016427   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.016571   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:15.016789   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:15.016964   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:15.016983   23738 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174036-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174036-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174036-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:46:15.133761   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:46:15.133787   23738 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 17:46:15.133810   23738 buildroot.go:174] setting up certificates
	I0725 17:46:15.133822   23738 provision.go:84] configureAuth start
	I0725 17:46:15.133832   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetMachineName
	I0725 17:46:15.134145   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:46:15.136827   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.137173   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.137201   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.137333   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.139909   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.140213   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.140231   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.140417   23738 provision.go:143] copyHostCerts
	I0725 17:46:15.140453   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:46:15.140492   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 17:46:15.140506   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:46:15.140625   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 17:46:15.140723   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:46:15.140749   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 17:46:15.140760   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:46:15.140806   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 17:46:15.140870   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:46:15.140897   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 17:46:15.140906   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:46:15.140939   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 17:46:15.141008   23738 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.ha-174036-m02 san=[127.0.0.1 192.168.39.197 ha-174036-m02 localhost minikube]
	I0725 17:46:15.336606   23738 provision.go:177] copyRemoteCerts
	I0725 17:46:15.336663   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:46:15.336687   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.339533   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.339895   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.339920   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.340156   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:15.340367   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.340574   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:15.340723   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	I0725 17:46:15.422722   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 17:46:15.422793   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 17:46:15.445735   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 17:46:15.445806   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 17:46:15.467773   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 17:46:15.467840   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 17:46:15.490131   23738 provision.go:87] duration metric: took 356.296388ms to configureAuth
	I0725 17:46:15.490157   23738 buildroot.go:189] setting minikube options for container-runtime
	I0725 17:46:15.490334   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:46:15.490444   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.493199   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.493589   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.493609   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.493798   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:15.494074   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.494309   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.494432   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:15.494584   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:15.494737   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:15.494750   23738 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 17:46:15.757132   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
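The %!s(MISSING) in the command above is a logging artifact (the format verb lost its argument in a second formatting pass), not what was executed. Reconstructed from the echoed output, the command is roughly the sketch below: it drops the insecure-registry flag into a CRI-O environment file and restarts the service.

    # hedged reconstruction; the option value comes from the CRIO_MINIKUBE_OPTIONS
    # line echoed back in the SSH output above
    sudo mkdir -p /etc/sysconfig
    printf '%s' "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio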
	I0725 17:46:15.757160   23738 main.go:141] libmachine: Checking connection to Docker...
	I0725 17:46:15.757170   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetURL
	I0725 17:46:15.758549   23738 main.go:141] libmachine: (ha-174036-m02) DBG | Using libvirt version 6000000
	I0725 17:46:15.760634   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.761094   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.761124   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.761298   23738 main.go:141] libmachine: Docker is up and running!
	I0725 17:46:15.761330   23738 main.go:141] libmachine: Reticulating splines...
	I0725 17:46:15.761338   23738 client.go:171] duration metric: took 20.581445856s to LocalClient.Create
	I0725 17:46:15.761362   23738 start.go:167] duration metric: took 20.581502574s to libmachine.API.Create "ha-174036"
	I0725 17:46:15.761373   23738 start.go:293] postStartSetup for "ha-174036-m02" (driver="kvm2")
	I0725 17:46:15.761389   23738 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:46:15.761408   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:15.761654   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:46:15.761677   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.763657   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.764015   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.764043   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.764202   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:15.764422   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.764624   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:15.764793   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	I0725 17:46:15.850065   23738 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:46:15.853948   23738 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 17:46:15.853971   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 17:46:15.854038   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 17:46:15.854132   23738 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 17:46:15.854143   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 17:46:15.854223   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:46:15.862786   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:46:15.884269   23738 start.go:296] duration metric: took 122.879764ms for postStartSetup
	I0725 17:46:15.884355   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetConfigRaw
	I0725 17:46:15.884906   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:46:15.887535   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.887914   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.887941   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.888133   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:46:15.888362   23738 start.go:128] duration metric: took 20.727779703s to createHost
	I0725 17:46:15.888388   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:15.890674   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.891037   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:15.891059   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:15.891178   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:15.891371   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.891542   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:15.891677   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:15.891827   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:46:15.891974   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.197 22 <nil> <nil>}
	I0725 17:46:15.891983   23738 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 17:46:15.996783   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929575.968778073
	
	I0725 17:46:15.996810   23738 fix.go:216] guest clock: 1721929575.968778073
	I0725 17:46:15.996820   23738 fix.go:229] Guest: 2024-07-25 17:46:15.968778073 +0000 UTC Remote: 2024-07-25 17:46:15.888376977 +0000 UTC m=+75.572426032 (delta=80.401096ms)
	I0725 17:46:15.996844   23738 fix.go:200] guest clock delta is within tolerance: 80.401096ms
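The date +%!s(MISSING).%!N(MISSING) above is the same logging artifact for `date +%s.%N`: the guest clock is read over SSH and compared with the host's wall clock, and here the ~80ms delta is within tolerance so no resync is needed. An illustrative sketch of that check (key path, user and IP taken from the log; the awk comparison is just for display):

    # read the guest clock over SSH and print the delta against the local clock
    KEY=/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa
    GUEST=$(ssh -i "$KEY" docker@192.168.39.197 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk -v h="$HOST" -v g="$GUEST" 'BEGIN { printf "guest clock delta: %+.3fs\n", h - g }'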
	I0725 17:46:15.996852   23738 start.go:83] releasing machines lock for "ha-174036-m02", held for 20.836357411s
	I0725 17:46:15.996877   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:15.997122   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:46:16.000081   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.000525   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:16.000544   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.003289   23738 out.go:177] * Found network options:
	I0725 17:46:16.004808   23738 out.go:177]   - NO_PROXY=192.168.39.165
	W0725 17:46:16.006215   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 17:46:16.006249   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:16.006788   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:16.006983   23738 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 17:46:16.007083   23738 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 17:46:16.007126   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	W0725 17:46:16.007151   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 17:46:16.007228   23738 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 17:46:16.007261   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 17:46:16.009867   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.009943   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.010280   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:16.010308   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.010344   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:16.010365   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:16.010452   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:16.010603   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 17:46:16.010661   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:16.010843   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 17:46:16.010862   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:16.011027   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 17:46:16.011021   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	I0725 17:46:16.011170   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	I0725 17:46:16.244742   23738 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 17:46:16.251126   23738 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 17:46:16.251186   23738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 17:46:16.266040   23738 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
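The find invocation above also carries a %!p(MISSING) logging artifact (for -printf "%p, "). Reconstructed, it renames any bridge/podman CNI configs out of the way so they no longer compete with the CNI that minikube installs; this sketch writes the -exec slightly more defensively than the logged one-liner.

    # hedged reconstruction: disable bridge/podman CNI configs by renaming them
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;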
	I0725 17:46:16.266061   23738 start.go:495] detecting cgroup driver to use...
	I0725 17:46:16.266121   23738 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 17:46:16.280925   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 17:46:16.295199   23738 docker.go:217] disabling cri-docker service (if available) ...
	I0725 17:46:16.295262   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 17:46:16.308431   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 17:46:16.322356   23738 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 17:46:16.432768   23738 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 17:46:16.569678   23738 docker.go:233] disabling docker service ...
	I0725 17:46:16.569759   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 17:46:16.593695   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 17:46:16.605656   23738 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 17:46:16.749283   23738 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 17:46:16.867731   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 17:46:16.881317   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:46:16.897749   23738 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 17:46:16.897798   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.906943   23738 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 17:46:16.906988   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.916138   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.925103   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.934217   23738 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 17:46:16.943712   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.952891   23738 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:46:16.968195   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
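Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, put conmon in the pod cgroup, and allow unprivileged low ports. A quick spot check of the resulting file (illustrative, not part of the test harness):

    # verify the CRI-O settings the edits above are meant to leave behind
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",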
	I0725 17:46:16.977374   23738 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 17:46:16.985563   23738 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 17:46:16.985623   23738 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 17:46:16.997634   23738 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
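The failed sysctl above is expected on a fresh VM: /proc/sys/net/bridge/* only exists once br_netfilter is loaded, which is why the very next steps are a modprobe and enabling IP forwarding. The same prerequisites, persisted explicitly (the modules-load.d/sysctl.d files are standard practice, not something the log shows):

    # load br_netfilter now and make the settings survive reboots
    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system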
	I0725 17:46:17.006156   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:46:17.119293   23738 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 17:46:17.251508   23738 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 17:46:17.251585   23738 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 17:46:17.256424   23738 start.go:563] Will wait 60s for crictl version
	I0725 17:46:17.256491   23738 ssh_runner.go:195] Run: which crictl
	I0725 17:46:17.259983   23738 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 17:46:17.297168   23738 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 17:46:17.297244   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:46:17.324368   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:46:17.352839   23738 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 17:46:17.354140   23738 out.go:177]   - env NO_PROXY=192.168.39.165
	I0725 17:46:17.355459   23738 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 17:46:17.358126   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:17.358444   23738 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:46:09 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 17:46:17.358472   23738 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 17:46:17.358653   23738 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 17:46:17.362321   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:46:17.373370   23738 mustload.go:65] Loading cluster: ha-174036
	I0725 17:46:17.373563   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:46:17.373796   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:46:17.373822   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:46:17.388382   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0725 17:46:17.388767   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:46:17.389179   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:46:17.389197   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:46:17.389473   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:46:17.389711   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:46:17.391333   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:46:17.391662   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:46:17.391686   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:46:17.405579   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45231
	I0725 17:46:17.405971   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:46:17.406393   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:46:17.406415   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:46:17.406700   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:46:17.406878   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:46:17.407016   23738 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036 for IP: 192.168.39.197
	I0725 17:46:17.407090   23738 certs.go:194] generating shared ca certs ...
	I0725 17:46:17.407127   23738 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:46:17.407260   23738 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 17:46:17.407323   23738 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 17:46:17.407334   23738 certs.go:256] generating profile certs ...
	I0725 17:46:17.407402   23738 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key
	I0725 17:46:17.407429   23738 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.10b823cc
	I0725 17:46:17.407444   23738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.10b823cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.197 192.168.39.254]
	I0725 17:46:17.543040   23738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.10b823cc ...
	I0725 17:46:17.543066   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.10b823cc: {Name:mkeb95191f3396f0d9f7d26e0743c170c184b50a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:46:17.543224   23738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.10b823cc ...
	I0725 17:46:17.543238   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.10b823cc: {Name:mk37c7f5246913dc22856aece47c3693a6ee3747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:46:17.543312   23738 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.10b823cc -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt
	I0725 17:46:17.543432   23738 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.10b823cc -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key
	I0725 17:46:17.543550   23738 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key
	I0725 17:46:17.543564   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 17:46:17.543576   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 17:46:17.543588   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 17:46:17.543601   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 17:46:17.543612   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 17:46:17.543625   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 17:46:17.543637   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 17:46:17.543649   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 17:46:17.543690   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 17:46:17.543717   23738 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 17:46:17.543726   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:46:17.543754   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 17:46:17.543774   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:46:17.543794   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 17:46:17.543827   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:46:17.543854   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 17:46:17.543867   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 17:46:17.543879   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:46:17.543908   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:46:17.546947   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:46:17.547426   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:46:17.547451   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:46:17.547658   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:46:17.547838   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:46:17.547993   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:46:17.548123   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:46:17.620690   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0725 17:46:17.625937   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0725 17:46:17.638090   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0725 17:46:17.642037   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0725 17:46:17.653544   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0725 17:46:17.658197   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0725 17:46:17.669081   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0725 17:46:17.673329   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0725 17:46:17.683670   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0725 17:46:17.687844   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0725 17:46:17.698859   23738 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0725 17:46:17.702629   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0725 17:46:17.712623   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:46:17.738164   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 17:46:17.762656   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:46:17.787275   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 17:46:17.811370   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0725 17:46:17.835472   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 17:46:17.859639   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:46:17.883559   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:46:17.907908   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 17:46:17.932502   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 17:46:17.956867   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:46:17.981160   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0725 17:46:17.997763   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0725 17:46:18.014729   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0725 17:46:18.031547   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0725 17:46:18.047855   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0725 17:46:18.063794   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0725 17:46:18.079112   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0725 17:46:18.094576   23738 ssh_runner.go:195] Run: openssl version
	I0725 17:46:18.099711   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:46:18.108985   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:46:18.113038   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:46:18.113079   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:46:18.118165   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:46:18.127360   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 17:46:18.136748   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 17:46:18.140558   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 17:46:18.140602   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 17:46:18.145565   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 17:46:18.154715   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 17:46:18.164195   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 17:46:18.168088   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 17:46:18.168128   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 17:46:18.173350   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
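The openssl x509 -hash calls above compute the subject-hash names (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL uses to look certificates up in /etc/ssl/certs; the test -L / ln -fs pairs do by hand what c_rehash would do. The same idea for a single certificate (shown for the minikube CA from the log):

    # install one CA the way the commands above do it
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL's subject-hash lookup name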
	I0725 17:46:18.182613   23738 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 17:46:18.186312   23738 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 17:46:18.186363   23738 kubeadm.go:934] updating node {m02 192.168.39.197 8443 v1.30.3 crio true true} ...
	I0725 17:46:18.186449   23738 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174036-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 17:46:18.186473   23738 kube-vip.go:115] generating kube-vip config ...
	I0725 17:46:18.186501   23738 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 17:46:18.201233   23738 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 17:46:18.201313   23738 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
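The manifest above runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 on eth0 via ARP, with leader election (plndr-cp-lock) and load balancing of port 8443 across control-plane members. Once the pod is up, a couple of hedged spot checks (illustrative commands, not part of the test):

    # on the current kube-vip leader the VIP should sit on eth0
    ip addr show eth0 | grep 192.168.39.254
    # and the API server should answer on the VIP (TLS verification skipped for brevity)
    curl -k https://192.168.39.254:8443/healthz; echo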
	I0725 17:46:18.201359   23738 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 17:46:18.209774   23738 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0725 17:46:18.209876   23738 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0725 17:46:18.218405   23738 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0725 17:46:18.218430   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0725 17:46:18.218435   23738 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0725 17:46:18.218451   23738 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0725 17:46:18.218487   23738 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0725 17:46:18.222370   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0725 17:46:18.222396   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0725 17:46:19.022050   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0725 17:46:19.022121   23738 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0725 17:46:19.026967   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0725 17:46:19.026999   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0725 17:46:19.297222   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:46:19.310991   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0725 17:46:19.311077   23738 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0725 17:46:19.314911   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0725 17:46:19.314948   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
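The kubectl/kubeadm/kubelet downloads above come from the official Kubernetes release endpoint, each verified against its published SHA-256 before being copied into /var/lib/minikube/binaries/v1.30.3. Done by hand for one binary, the pattern looks like this (URLs as in the log):

    # fetch a release binary and verify it against its published checksum
    VER=v1.30.3
    BIN=kubelet
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${BIN}"
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/${BIN}.sha256"
    echo "$(cat ${BIN}.sha256)  ${BIN}" | sha256sum --check
    sudo install -D -m 0755 "${BIN}" "/var/lib/minikube/binaries/${VER}/${BIN}"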
	I0725 17:46:19.677180   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0725 17:46:19.685985   23738 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0725 17:46:19.702337   23738 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:46:19.724287   23738 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0725 17:46:19.739482   23738 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0725 17:46:19.743069   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:46:19.754007   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:46:19.859835   23738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:46:19.874970   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:46:19.875451   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:46:19.875502   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:46:19.890713   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33725
	I0725 17:46:19.891155   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:46:19.891608   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:46:19.891637   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:46:19.891975   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:46:19.892175   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:46:19.892362   23738 start.go:317] joinCluster: &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:46:19.892452   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0725 17:46:19.892468   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:46:19.895393   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:46:19.895800   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:46:19.895829   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:46:19.895944   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:46:19.896093   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:46:19.896227   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:46:19.896407   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:46:20.043032   23738 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:46:20.043070   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token egdkav.g7g6hnq2ok6nvfh4 --discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174036-m02 --control-plane --apiserver-advertise-address=192.168.39.197 --apiserver-bind-port=8443"
	I0725 17:46:43.362278   23738 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token egdkav.g7g6hnq2ok6nvfh4 --discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174036-m02 --control-plane --apiserver-advertise-address=192.168.39.197 --apiserver-bind-port=8443": (23.319185275s)
	I0725 17:46:43.362316   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0725 17:46:43.952764   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174036-m02 minikube.k8s.io/updated_at=2024_07_25T17_46_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=ha-174036 minikube.k8s.io/primary=false
	I0725 17:46:44.064206   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174036-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0725 17:46:44.172745   23738 start.go:319] duration metric: took 24.280379011s to joinCluster
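The lines above capture the control-plane join flow: a join command is generated on the primary node (kubeadm token create --print-join-command --ttl=0), the resulting kubeadm join is run on m02 with --control-plane, and the new node is then labeled with minikube.k8s.io/* metadata and has its NoSchedule taint removed so it can also run workloads. A rough manual sketch of the same sequence, with the token and CA hash left as placeholders (the concrete values only appear in the log above):

    # on the existing control-plane node: emit a join command with a non-expiring token
    sudo kubeadm token create --print-join-command --ttl=0

    # on the joining node, using the values printed by the command above
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-bind-port=8443 \
      --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174036-m02

    # from any machine with kubectl access: label the node and allow scheduling on it
    kubectl label --overwrite nodes ha-174036-m02 minikube.k8s.io/primary=false
    kubectl taint nodes ha-174036-m02 node-role.kubernetes.io/control-plane:NoSchedule-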
	I0725 17:46:44.172813   23738 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:46:44.173079   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:46:44.174256   23738 out.go:177] * Verifying Kubernetes components...
	I0725 17:46:44.175431   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:46:44.432392   23738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:46:44.486204   23738 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:46:44.486472   23738 kapi.go:59] client config for ha-174036: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0725 17:46:44.486531   23738 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.165:8443
	I0725 17:46:44.486749   23738 node_ready.go:35] waiting up to 6m0s for node "ha-174036-m02" to be "Ready" ...
	I0725 17:46:44.486862   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:44.486872   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:44.486883   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:44.486890   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:44.498977   23738 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0725 17:46:44.987457   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:44.987477   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:44.987485   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:44.987488   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:44.991792   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:46:45.487749   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:45.487767   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:45.487775   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:45.487779   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:45.506677   23738 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0725 17:46:45.987641   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:45.987660   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:45.987667   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:45.987671   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:45.990820   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:46.487804   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:46.487830   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:46.487841   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:46.487847   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:46.490960   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:46.491598   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:46.986998   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:46.987020   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:46.987031   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:46.987037   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:46.992238   23738 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 17:46:47.487433   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:47.487456   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:47.487464   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:47.487469   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:47.490809   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:47.986945   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:47.986966   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:47.986978   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:47.986985   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:47.990837   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:48.487058   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:48.487079   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:48.487088   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:48.487091   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:48.490859   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:48.491656   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:48.987057   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:48.987078   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:48.987086   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:48.987090   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:48.990291   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:49.487142   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:49.487161   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:49.487169   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:49.487177   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:49.490373   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:49.987828   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:49.987849   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:49.987857   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:49.987861   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:50.077985   23738 round_trippers.go:574] Response Status: 200 OK in 90 milliseconds
	I0725 17:46:50.487105   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:50.487137   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:50.487144   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:50.487148   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:50.490951   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:50.987897   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:50.987917   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:50.987925   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:50.987928   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:50.991037   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:50.991685   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:51.486930   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:51.486949   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:51.486956   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:51.486961   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:51.490311   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:51.987313   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:51.987344   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:51.987355   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:51.987361   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:51.990610   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:52.487171   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:52.487197   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:52.487216   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:52.487222   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:52.490341   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:52.987331   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:52.987353   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:52.987361   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:52.987366   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:52.990592   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:53.487689   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:53.487711   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:53.487719   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:53.487723   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:53.490971   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:53.491391   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:53.987827   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:53.987848   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:53.987856   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:53.987861   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:53.990984   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:54.487464   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:54.487486   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:54.487495   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:54.487499   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:54.490978   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:54.986989   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:54.987013   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:54.987021   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:54.987024   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:54.990625   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:55.487831   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:55.487858   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:55.487869   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:55.487876   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:55.491327   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:55.491818   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:55.987146   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:55.987166   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:55.987175   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:55.987179   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:55.990574   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:56.487954   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:56.487976   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:56.487984   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:56.487989   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:56.491289   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:56.987923   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:56.987945   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:56.987955   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:56.987960   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:56.991204   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:57.487631   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:57.487651   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:57.487659   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:57.487666   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:57.490533   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:46:57.987588   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:57.987612   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:57.987620   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:57.987624   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:57.990687   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:57.991285   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:46:58.487618   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:58.487639   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:58.487647   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:58.487651   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:58.490842   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:46:58.987837   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:58.987856   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:58.987864   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:58.987870   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:58.990761   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:46:59.487374   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:59.487394   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:59.487403   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:59.487406   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:59.492401   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:46:59.987374   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:46:59.987393   23738 round_trippers.go:469] Request Headers:
	I0725 17:46:59.987406   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:46:59.987410   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:46:59.991439   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:46:59.992125   23738 node_ready.go:53] node "ha-174036-m02" has status "Ready":"False"
	I0725 17:47:00.487741   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:00.487762   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:00.487770   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:00.487774   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:00.491187   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:00.987289   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:00.987315   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:00.987323   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:00.987326   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:00.990709   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:01.487594   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:01.487619   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:01.487626   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:01.487630   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:01.491124   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:01.987142   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:01.987168   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:01.987178   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:01.987183   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:01.990992   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:01.991771   23738 node_ready.go:49] node "ha-174036-m02" has status "Ready":"True"
	I0725 17:47:01.991787   23738 node_ready.go:38] duration metric: took 17.505006515s for node "ha-174036-m02" to be "Ready" ...
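The long run of GET requests above is the node readiness poll: roughly every 500ms the node object is fetched and its Ready condition checked, until it flips to True after about 17.5s. Assuming the kubeconfig context created for this profile is named ha-174036, a manual equivalent would be:

    # block until the node reports Ready (same 6m budget the test uses)
    kubectl --context ha-174036 wait --for=condition=Ready node/ha-174036-m02 --timeout=6m

    # or read the condition directly
    kubectl --context ha-174036 get node ha-174036-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'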
	I0725 17:47:01.991795   23738 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:47:01.991849   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:47:01.991857   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:01.991864   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:01.991868   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:01.997924   23738 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0725 17:47:02.004622   23738 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.004712   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-flblg
	I0725 17:47:02.004723   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.004733   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.004740   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.007973   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.008557   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:02.008570   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.008577   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.008580   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.011288   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.011925   23738 pod_ready.go:92] pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.011942   23738 pod_ready.go:81] duration metric: took 7.296597ms for pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.011950   23738 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.011993   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vtr9p
	I0725 17:47:02.012000   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.012006   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.012011   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.014637   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.015210   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:02.015224   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.015232   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.015237   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.017977   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.018537   23738 pod_ready.go:92] pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.018552   23738 pod_ready.go:81] duration metric: took 6.596031ms for pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.018563   23738 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.018615   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036
	I0725 17:47:02.018627   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.018636   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.018642   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.021772   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.022544   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:02.022558   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.022570   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.022576   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.025266   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.026133   23738 pod_ready.go:92] pod "etcd-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.026146   23738 pod_ready.go:81] duration metric: took 7.576965ms for pod "etcd-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.026154   23738 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.026193   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036-m02
	I0725 17:47:02.026200   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.026206   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.026209   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.028923   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.029717   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:02.029731   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.029742   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.029748   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.032160   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:02.032657   23738 pod_ready.go:92] pod "etcd-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.032673   23738 pod_ready.go:81] duration metric: took 6.513801ms for pod "etcd-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.032693   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.188058   23738 request.go:629] Waited for 155.306844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036
	I0725 17:47:02.188125   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036
	I0725 17:47:02.188131   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.188139   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.188145   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.191508   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.388043   23738 request.go:629] Waited for 194.375732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:02.388137   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:02.388145   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.388157   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.388168   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.391817   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.392515   23738 pod_ready.go:92] pod "kube-apiserver-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.392550   23738 pod_ready.go:81] duration metric: took 359.843486ms for pod "kube-apiserver-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.392569   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.587205   23738 request.go:629] Waited for 194.495232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m02
	I0725 17:47:02.587262   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m02
	I0725 17:47:02.587267   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.587276   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.587281   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.590717   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.787801   23738 request.go:629] Waited for 196.393939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:02.787858   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:02.787863   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.787871   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.787877   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.791014   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:02.791717   23738 pod_ready.go:92] pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:02.791736   23738 pod_ready.go:81] duration metric: took 399.154295ms for pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.791748   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:02.987879   23738 request.go:629] Waited for 196.064166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036
	I0725 17:47:02.987931   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036
	I0725 17:47:02.987936   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:02.987943   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:02.987948   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:02.991504   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:03.187658   23738 request.go:629] Waited for 195.368899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:03.187737   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:03.187746   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:03.187758   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:03.187767   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:03.190876   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:03.191453   23738 pod_ready.go:92] pod "kube-controller-manager-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:03.191473   23738 pod_ready.go:81] duration metric: took 399.71658ms for pod "kube-controller-manager-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:03.191487   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:03.387434   23738 request.go:629] Waited for 195.878721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m02
	I0725 17:47:03.387500   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m02
	I0725 17:47:03.387505   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:03.387513   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:03.387518   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:03.390724   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:03.587718   23738 request.go:629] Waited for 196.356735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:03.587785   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:03.587790   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:03.587798   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:03.587801   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:03.590730   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:03.591204   23738 pod_ready.go:92] pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:03.591224   23738 pod_ready.go:81] duration metric: took 399.728826ms for pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:03.591241   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6jdn" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:03.787199   23738 request.go:629] Waited for 195.760729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6jdn
	I0725 17:47:03.787258   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6jdn
	I0725 17:47:03.787265   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:03.787276   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:03.787284   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:03.790752   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:03.987529   23738 request.go:629] Waited for 196.300522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:03.987598   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:03.987604   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:03.987612   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:03.987616   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:03.990728   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:03.991455   23738 pod_ready.go:92] pod "kube-proxy-s6jdn" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:03.991476   23738 pod_ready.go:81] duration metric: took 400.22747ms for pod "kube-proxy-s6jdn" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:03.991488   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xwvdm" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:04.187504   23738 request.go:629] Waited for 195.922258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwvdm
	I0725 17:47:04.187573   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwvdm
	I0725 17:47:04.187581   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:04.187593   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:04.187603   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:04.190592   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:47:04.387152   23738 request.go:629] Waited for 195.96497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:04.387227   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:04.387233   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:04.387241   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:04.387246   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:04.390491   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:04.391017   23738 pod_ready.go:92] pod "kube-proxy-xwvdm" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:04.391033   23738 pod_ready.go:81] duration metric: took 399.537258ms for pod "kube-proxy-xwvdm" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:04.391045   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:04.587153   23738 request.go:629] Waited for 196.034043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036
	I0725 17:47:04.587216   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036
	I0725 17:47:04.587222   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:04.587230   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:04.587234   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:04.590405   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:04.787653   23738 request.go:629] Waited for 196.383457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:04.787704   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:47:04.787709   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:04.787717   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:04.787721   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:04.790933   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:04.791508   23738 pod_ready.go:92] pod "kube-scheduler-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:04.791530   23738 pod_ready.go:81] duration metric: took 400.476886ms for pod "kube-scheduler-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:04.791551   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:04.988176   23738 request.go:629] Waited for 196.552995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m02
	I0725 17:47:04.988251   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m02
	I0725 17:47:04.988258   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:04.988265   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:04.988270   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:04.991506   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:05.187255   23738 request.go:629] Waited for 195.282705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:05.187325   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:47:05.187330   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.187337   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.187342   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.191136   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:05.192274   23738 pod_ready.go:92] pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:47:05.192298   23738 pod_ready.go:81] duration metric: took 400.736873ms for pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:47:05.192309   23738 pod_ready.go:38] duration metric: took 3.200502465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
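Once the node is Ready, the same poll is repeated for each system-critical pod: every kube-system pod matching the labels listed above is fetched, followed by the node it is scheduled on, until both report Ready. A condensed manual equivalent (label selectors copied from the log; the context name is assumed):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context ha-174036 -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done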
	I0725 17:47:05.192352   23738 api_server.go:52] waiting for apiserver process to appear ...
	I0725 17:47:05.192410   23738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:47:05.207600   23738 api_server.go:72] duration metric: took 21.034747687s to wait for apiserver process to appear ...
	I0725 17:47:05.207629   23738 api_server.go:88] waiting for apiserver healthz status ...
	I0725 17:47:05.207654   23738 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0725 17:47:05.216095   23738 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0725 17:47:05.216153   23738 round_trippers.go:463] GET https://192.168.39.165:8443/version
	I0725 17:47:05.216160   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.216168   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.216171   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.217820   23738 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 17:47:05.217918   23738 api_server.go:141] control plane version: v1.30.3
	I0725 17:47:05.217936   23738 api_server.go:131] duration metric: took 10.299137ms to wait for apiserver health ...
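The health check above has two parts: pgrep confirms a kube-apiserver process is running, and the /healthz endpoint is then polled until it returns 200 with the body "ok". Two hedged ways to reproduce the same probe from the test host (the second skips TLS verification, which is acceptable only as a quick liveness poke):

    # via kubectl's authenticated transport
    kubectl --context ha-174036 get --raw /healthz

    # or directly against the endpoint shown in the log
    curl -k https://192.168.39.165:8443/healthz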
	I0725 17:47:05.217946   23738 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:47:05.387375   23738 request.go:629] Waited for 169.360683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:47:05.387456   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:47:05.387462   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.387472   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.387480   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.392825   23738 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 17:47:05.397101   23738 system_pods.go:59] 17 kube-system pods found
	I0725 17:47:05.397124   23738 system_pods.go:61] "coredns-7db6d8ff4d-flblg" [94857bc1-d7ba-466b-91d7-e2d5041159f2] Running
	I0725 17:47:05.397128   23738 system_pods.go:61] "coredns-7db6d8ff4d-vtr9p" [fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a] Running
	I0725 17:47:05.397133   23738 system_pods.go:61] "etcd-ha-174036" [4e6a0109-6d8e-406e-9ca5-b7190bf72eab] Running
	I0725 17:47:05.397136   23738 system_pods.go:61] "etcd-ha-174036-m02" [9d58a70b-3891-441c-8eb8-214121847c63] Running
	I0725 17:47:05.397139   23738 system_pods.go:61] "kindnet-2c2n8" [c8ed79cb-52d7-4dfa-a3a0-02329169d86c] Running
	I0725 17:47:05.397142   23738 system_pods.go:61] "kindnet-k4d8x" [3a912cbb-8702-492d-8316-258aedbd053d] Running
	I0725 17:47:05.397145   23738 system_pods.go:61] "kube-apiserver-ha-174036" [efbc55c3-c762-4d38-9602-02454c1ce8f4] Running
	I0725 17:47:05.397147   23738 system_pods.go:61] "kube-apiserver-ha-174036-m02" [2c704058-60b1-43fb-bde8-a5f7d8a9bc4f] Running
	I0725 17:47:05.397150   23738 system_pods.go:61] "kube-controller-manager-ha-174036" [bfdd16fe-f72f-4f8f-b175-b78d7dec78bb] Running
	I0725 17:47:05.397153   23738 system_pods.go:61] "kube-controller-manager-ha-174036-m02" [b181bc10-9572-43f4-9242-6e0676abdc64] Running
	I0725 17:47:05.397155   23738 system_pods.go:61] "kube-proxy-s6jdn" [f13b463b-f7f9-4b49-8e29-209cb153a6e6] Running
	I0725 17:47:05.397158   23738 system_pods.go:61] "kube-proxy-xwvdm" [aa62fb5e-6304-40b4-aa20-190c9ee56057] Running
	I0725 17:47:05.397160   23738 system_pods.go:61] "kube-scheduler-ha-174036" [4174968d-5004-47e6-b8fa-8c9ab4720f09] Running
	I0725 17:47:05.397163   23738 system_pods.go:61] "kube-scheduler-ha-174036-m02" [81a367fa-3418-45a0-85ca-f20549a43a2e] Running
	I0725 17:47:05.397166   23738 system_pods.go:61] "kube-vip-ha-174036" [2ce4bfe5-5441-4a28-889e-7743367f32b2] Running
	I0725 17:47:05.397168   23738 system_pods.go:61] "kube-vip-ha-174036-m02" [5a70af21-4ee6-4270-8e25-3b81618c629a] Running
	I0725 17:47:05.397171   23738 system_pods.go:61] "storage-provisioner" [c9354422-69ff-4676-80d1-4940badf9b4e] Running
	I0725 17:47:05.397176   23738 system_pods.go:74] duration metric: took 179.224406ms to wait for pod list to return data ...
	I0725 17:47:05.397190   23738 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:47:05.587416   23738 request.go:629] Waited for 190.161381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0725 17:47:05.587517   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0725 17:47:05.587525   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.587533   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.587540   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.590849   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:05.591135   23738 default_sa.go:45] found service account: "default"
	I0725 17:47:05.591157   23738 default_sa.go:55] duration metric: took 193.957914ms for default service account to be created ...
	I0725 17:47:05.591167   23738 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 17:47:05.787604   23738 request.go:629] Waited for 196.37118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:47:05.787675   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:47:05.787683   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.787692   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.787696   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.793242   23738 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0725 17:47:05.798193   23738 system_pods.go:86] 17 kube-system pods found
	I0725 17:47:05.798219   23738 system_pods.go:89] "coredns-7db6d8ff4d-flblg" [94857bc1-d7ba-466b-91d7-e2d5041159f2] Running
	I0725 17:47:05.798225   23738 system_pods.go:89] "coredns-7db6d8ff4d-vtr9p" [fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a] Running
	I0725 17:47:05.798230   23738 system_pods.go:89] "etcd-ha-174036" [4e6a0109-6d8e-406e-9ca5-b7190bf72eab] Running
	I0725 17:47:05.798234   23738 system_pods.go:89] "etcd-ha-174036-m02" [9d58a70b-3891-441c-8eb8-214121847c63] Running
	I0725 17:47:05.798238   23738 system_pods.go:89] "kindnet-2c2n8" [c8ed79cb-52d7-4dfa-a3a0-02329169d86c] Running
	I0725 17:47:05.798242   23738 system_pods.go:89] "kindnet-k4d8x" [3a912cbb-8702-492d-8316-258aedbd053d] Running
	I0725 17:47:05.798246   23738 system_pods.go:89] "kube-apiserver-ha-174036" [efbc55c3-c762-4d38-9602-02454c1ce8f4] Running
	I0725 17:47:05.798250   23738 system_pods.go:89] "kube-apiserver-ha-174036-m02" [2c704058-60b1-43fb-bde8-a5f7d8a9bc4f] Running
	I0725 17:47:05.798255   23738 system_pods.go:89] "kube-controller-manager-ha-174036" [bfdd16fe-f72f-4f8f-b175-b78d7dec78bb] Running
	I0725 17:47:05.798263   23738 system_pods.go:89] "kube-controller-manager-ha-174036-m02" [b181bc10-9572-43f4-9242-6e0676abdc64] Running
	I0725 17:47:05.798266   23738 system_pods.go:89] "kube-proxy-s6jdn" [f13b463b-f7f9-4b49-8e29-209cb153a6e6] Running
	I0725 17:47:05.798270   23738 system_pods.go:89] "kube-proxy-xwvdm" [aa62fb5e-6304-40b4-aa20-190c9ee56057] Running
	I0725 17:47:05.798275   23738 system_pods.go:89] "kube-scheduler-ha-174036" [4174968d-5004-47e6-b8fa-8c9ab4720f09] Running
	I0725 17:47:05.798279   23738 system_pods.go:89] "kube-scheduler-ha-174036-m02" [81a367fa-3418-45a0-85ca-f20549a43a2e] Running
	I0725 17:47:05.798285   23738 system_pods.go:89] "kube-vip-ha-174036" [2ce4bfe5-5441-4a28-889e-7743367f32b2] Running
	I0725 17:47:05.798288   23738 system_pods.go:89] "kube-vip-ha-174036-m02" [5a70af21-4ee6-4270-8e25-3b81618c629a] Running
	I0725 17:47:05.798291   23738 system_pods.go:89] "storage-provisioner" [c9354422-69ff-4676-80d1-4940badf9b4e] Running
	I0725 17:47:05.798299   23738 system_pods.go:126] duration metric: took 207.125612ms to wait for k8s-apps to be running ...
	I0725 17:47:05.798307   23738 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 17:47:05.798359   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:47:05.812296   23738 system_svc.go:56] duration metric: took 13.974348ms WaitForService to wait for kubelet
	I0725 17:47:05.812345   23738 kubeadm.go:582] duration metric: took 21.63949505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:47:05.812372   23738 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:47:05.987744   23738 request.go:629] Waited for 175.278659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I0725 17:47:05.987809   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I0725 17:47:05.987816   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:05.987832   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:05.987842   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:05.991239   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:05.992165   23738 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:47:05.992191   23738 node_conditions.go:123] node cpu capacity is 2
	I0725 17:47:05.992206   23738 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:47:05.992212   23738 node_conditions.go:123] node cpu capacity is 2
	I0725 17:47:05.992220   23738 node_conditions.go:105] duration metric: took 179.836812ms to run NodePressure ...
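The step labeled "verifying NodePressure condition" reads each node's status, logging CPU and ephemeral-storage capacity for both nodes (2 CPUs and 17734596Ki each here). The same fields can be inspected with kubectl, assuming the ha-174036 context:

    # capacity map minikube is reading
    kubectl --context ha-174036 get node ha-174036 -o jsonpath='{.status.capacity}'

    # pressure conditions that should all be False on a healthy node
    kubectl --context ha-174036 describe nodes | grep -E 'MemoryPressure|DiskPressure|PIDPressure'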
	I0725 17:47:05.992235   23738 start.go:241] waiting for startup goroutines ...
	I0725 17:47:05.992270   23738 start.go:255] writing updated cluster config ...
	I0725 17:47:05.994244   23738 out.go:177] 
	I0725 17:47:05.995505   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:47:05.995594   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:47:05.998012   23738 out.go:177] * Starting "ha-174036-m03" control-plane node in "ha-174036" cluster
	I0725 17:47:05.999095   23738 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:47:05.999118   23738 cache.go:56] Caching tarball of preloaded images
	I0725 17:47:05.999208   23738 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 17:47:05.999220   23738 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 17:47:05.999312   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:47:05.999474   23738 start.go:360] acquireMachinesLock for ha-174036-m03: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 17:47:05.999519   23738 start.go:364] duration metric: took 24.854µs to acquireMachinesLock for "ha-174036-m03"
	I0725 17:47:05.999541   23738 start.go:93] Provisioning new machine with config: &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:47:05.999680   23738 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0725 17:47:06.001046   23738 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 17:47:06.001134   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:47:06.001175   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:47:06.016185   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38231
	I0725 17:47:06.016629   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:47:06.017035   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:47:06.017056   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:47:06.017419   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:47:06.017634   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetMachineName
	I0725 17:47:06.017758   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:06.017903   23738 start.go:159] libmachine.API.Create for "ha-174036" (driver="kvm2")
	I0725 17:47:06.017941   23738 client.go:168] LocalClient.Create starting
	I0725 17:47:06.017980   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 17:47:06.018046   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:47:06.018065   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:47:06.018115   23738 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 17:47:06.018139   23738 main.go:141] libmachine: Decoding PEM data...
	I0725 17:47:06.018150   23738 main.go:141] libmachine: Parsing certificate...
	I0725 17:47:06.018168   23738 main.go:141] libmachine: Running pre-create checks...
	I0725 17:47:06.018176   23738 main.go:141] libmachine: (ha-174036-m03) Calling .PreCreateCheck
	I0725 17:47:06.018375   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetConfigRaw
	I0725 17:47:06.018882   23738 main.go:141] libmachine: Creating machine...
	I0725 17:47:06.018897   23738 main.go:141] libmachine: (ha-174036-m03) Calling .Create
	I0725 17:47:06.019021   23738 main.go:141] libmachine: (ha-174036-m03) Creating KVM machine...
	I0725 17:47:06.020239   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found existing default KVM network
	I0725 17:47:06.020312   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found existing private KVM network mk-ha-174036
	I0725 17:47:06.020486   23738 main.go:141] libmachine: (ha-174036-m03) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03 ...
	I0725 17:47:06.020515   23738 main.go:141] libmachine: (ha-174036-m03) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 17:47:06.020527   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:06.020448   24535 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:47:06.020673   23738 main.go:141] libmachine: (ha-174036-m03) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 17:47:06.243986   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:06.243871   24535 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa...
	I0725 17:47:06.415514   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:06.415394   24535 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/ha-174036-m03.rawdisk...
	I0725 17:47:06.415552   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Writing magic tar header
	I0725 17:47:06.415569   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Writing SSH key tar header
	I0725 17:47:06.415581   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:06.415502   24535 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03 ...
	I0725 17:47:06.415599   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03
	I0725 17:47:06.415614   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03 (perms=drwx------)
	I0725 17:47:06.415624   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 17:47:06.415635   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:47:06.415648   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 17:47:06.415662   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 17:47:06.415678   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 17:47:06.415690   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 17:47:06.415702   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 17:47:06.415713   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home/jenkins
	I0725 17:47:06.415722   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 17:47:06.415735   23738 main.go:141] libmachine: (ha-174036-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 17:47:06.415745   23738 main.go:141] libmachine: (ha-174036-m03) Creating domain...
	I0725 17:47:06.415756   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Checking permissions on dir: /home
	I0725 17:47:06.415766   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Skipping /home - not owner
	I0725 17:47:06.416796   23738 main.go:141] libmachine: (ha-174036-m03) define libvirt domain using xml: 
	I0725 17:47:06.416815   23738 main.go:141] libmachine: (ha-174036-m03) <domain type='kvm'>
	I0725 17:47:06.416823   23738 main.go:141] libmachine: (ha-174036-m03)   <name>ha-174036-m03</name>
	I0725 17:47:06.416828   23738 main.go:141] libmachine: (ha-174036-m03)   <memory unit='MiB'>2200</memory>
	I0725 17:47:06.416834   23738 main.go:141] libmachine: (ha-174036-m03)   <vcpu>2</vcpu>
	I0725 17:47:06.416839   23738 main.go:141] libmachine: (ha-174036-m03)   <features>
	I0725 17:47:06.416845   23738 main.go:141] libmachine: (ha-174036-m03)     <acpi/>
	I0725 17:47:06.416850   23738 main.go:141] libmachine: (ha-174036-m03)     <apic/>
	I0725 17:47:06.416857   23738 main.go:141] libmachine: (ha-174036-m03)     <pae/>
	I0725 17:47:06.416862   23738 main.go:141] libmachine: (ha-174036-m03)     
	I0725 17:47:06.416867   23738 main.go:141] libmachine: (ha-174036-m03)   </features>
	I0725 17:47:06.416872   23738 main.go:141] libmachine: (ha-174036-m03)   <cpu mode='host-passthrough'>
	I0725 17:47:06.416878   23738 main.go:141] libmachine: (ha-174036-m03)   
	I0725 17:47:06.416885   23738 main.go:141] libmachine: (ha-174036-m03)   </cpu>
	I0725 17:47:06.416891   23738 main.go:141] libmachine: (ha-174036-m03)   <os>
	I0725 17:47:06.416897   23738 main.go:141] libmachine: (ha-174036-m03)     <type>hvm</type>
	I0725 17:47:06.416903   23738 main.go:141] libmachine: (ha-174036-m03)     <boot dev='cdrom'/>
	I0725 17:47:06.416908   23738 main.go:141] libmachine: (ha-174036-m03)     <boot dev='hd'/>
	I0725 17:47:06.416913   23738 main.go:141] libmachine: (ha-174036-m03)     <bootmenu enable='no'/>
	I0725 17:47:06.416920   23738 main.go:141] libmachine: (ha-174036-m03)   </os>
	I0725 17:47:06.416925   23738 main.go:141] libmachine: (ha-174036-m03)   <devices>
	I0725 17:47:06.416932   23738 main.go:141] libmachine: (ha-174036-m03)     <disk type='file' device='cdrom'>
	I0725 17:47:06.416941   23738 main.go:141] libmachine: (ha-174036-m03)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/boot2docker.iso'/>
	I0725 17:47:06.416952   23738 main.go:141] libmachine: (ha-174036-m03)       <target dev='hdc' bus='scsi'/>
	I0725 17:47:06.416963   23738 main.go:141] libmachine: (ha-174036-m03)       <readonly/>
	I0725 17:47:06.416973   23738 main.go:141] libmachine: (ha-174036-m03)     </disk>
	I0725 17:47:06.416985   23738 main.go:141] libmachine: (ha-174036-m03)     <disk type='file' device='disk'>
	I0725 17:47:06.416995   23738 main.go:141] libmachine: (ha-174036-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 17:47:06.417006   23738 main.go:141] libmachine: (ha-174036-m03)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/ha-174036-m03.rawdisk'/>
	I0725 17:47:06.417016   23738 main.go:141] libmachine: (ha-174036-m03)       <target dev='hda' bus='virtio'/>
	I0725 17:47:06.417054   23738 main.go:141] libmachine: (ha-174036-m03)     </disk>
	I0725 17:47:06.417080   23738 main.go:141] libmachine: (ha-174036-m03)     <interface type='network'>
	I0725 17:47:06.417090   23738 main.go:141] libmachine: (ha-174036-m03)       <source network='mk-ha-174036'/>
	I0725 17:47:06.417102   23738 main.go:141] libmachine: (ha-174036-m03)       <model type='virtio'/>
	I0725 17:47:06.417129   23738 main.go:141] libmachine: (ha-174036-m03)     </interface>
	I0725 17:47:06.417150   23738 main.go:141] libmachine: (ha-174036-m03)     <interface type='network'>
	I0725 17:47:06.417165   23738 main.go:141] libmachine: (ha-174036-m03)       <source network='default'/>
	I0725 17:47:06.417177   23738 main.go:141] libmachine: (ha-174036-m03)       <model type='virtio'/>
	I0725 17:47:06.417190   23738 main.go:141] libmachine: (ha-174036-m03)     </interface>
	I0725 17:47:06.417201   23738 main.go:141] libmachine: (ha-174036-m03)     <serial type='pty'>
	I0725 17:47:06.417211   23738 main.go:141] libmachine: (ha-174036-m03)       <target port='0'/>
	I0725 17:47:06.417225   23738 main.go:141] libmachine: (ha-174036-m03)     </serial>
	I0725 17:47:06.417237   23738 main.go:141] libmachine: (ha-174036-m03)     <console type='pty'>
	I0725 17:47:06.417247   23738 main.go:141] libmachine: (ha-174036-m03)       <target type='serial' port='0'/>
	I0725 17:47:06.417257   23738 main.go:141] libmachine: (ha-174036-m03)     </console>
	I0725 17:47:06.417267   23738 main.go:141] libmachine: (ha-174036-m03)     <rng model='virtio'>
	I0725 17:47:06.417281   23738 main.go:141] libmachine: (ha-174036-m03)       <backend model='random'>/dev/random</backend>
	I0725 17:47:06.417292   23738 main.go:141] libmachine: (ha-174036-m03)     </rng>
	I0725 17:47:06.417303   23738 main.go:141] libmachine: (ha-174036-m03)     
	I0725 17:47:06.417327   23738 main.go:141] libmachine: (ha-174036-m03)     
	I0725 17:47:06.417339   23738 main.go:141] libmachine: (ha-174036-m03)   </devices>
	I0725 17:47:06.417349   23738 main.go:141] libmachine: (ha-174036-m03) </domain>
	I0725 17:47:06.417359   23738 main.go:141] libmachine: (ha-174036-m03) 
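Editor's note: after assembling the XML above, the kvm2 driver defines and boots the domain through libvirt. A minimal stand-alone sketch of the equivalent step done by hand via the virsh CLI (not minikube's driver code; the XML file name ha-174036-m03.xml is hypothetical):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // defineAndStart registers a domain from an XML file and boots it,
    // equivalent to `virsh define` followed by `virsh start`.
    func defineAndStart(xmlPath, name string) error {
    	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
    		return fmt.Errorf("define: %v: %s", err, out)
    	}
    	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput(); err != nil {
    		return fmt.Errorf("start: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := defineAndStart("ha-174036-m03.xml", "ha-174036-m03"); err != nil {
    		log.Fatal(err)
    	}
    }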
	I0725 17:47:06.423941   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:d2:b9:6e in network default
	I0725 17:47:06.424555   23738 main.go:141] libmachine: (ha-174036-m03) Ensuring networks are active...
	I0725 17:47:06.424587   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:06.425393   23738 main.go:141] libmachine: (ha-174036-m03) Ensuring network default is active
	I0725 17:47:06.425810   23738 main.go:141] libmachine: (ha-174036-m03) Ensuring network mk-ha-174036 is active
	I0725 17:47:06.426261   23738 main.go:141] libmachine: (ha-174036-m03) Getting domain xml...
	I0725 17:47:06.427092   23738 main.go:141] libmachine: (ha-174036-m03) Creating domain...
	I0725 17:47:07.634394   23738 main.go:141] libmachine: (ha-174036-m03) Waiting to get IP...
	I0725 17:47:07.635375   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:07.635795   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:07.635840   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:07.635779   24535 retry.go:31] will retry after 276.28905ms: waiting for machine to come up
	I0725 17:47:07.913228   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:07.913632   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:07.913665   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:07.913587   24535 retry.go:31] will retry after 312.407761ms: waiting for machine to come up
	I0725 17:47:08.228074   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:08.228534   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:08.228559   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:08.228485   24535 retry.go:31] will retry after 351.367598ms: waiting for machine to come up
	I0725 17:47:08.581023   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:08.581512   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:08.581547   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:08.581458   24535 retry.go:31] will retry after 446.660652ms: waiting for machine to come up
	I0725 17:47:09.030021   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:09.030503   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:09.030523   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:09.030459   24535 retry.go:31] will retry after 522.331171ms: waiting for machine to come up
	I0725 17:47:09.554166   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:09.554592   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:09.554621   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:09.554549   24535 retry.go:31] will retry after 586.124916ms: waiting for machine to come up
	I0725 17:47:10.141876   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:10.142310   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:10.142341   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:10.142264   24535 retry.go:31] will retry after 1.030881544s: waiting for machine to come up
	I0725 17:47:11.175199   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:11.175672   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:11.175703   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:11.175632   24535 retry.go:31] will retry after 1.173789187s: waiting for machine to come up
	I0725 17:47:12.351103   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:12.351627   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:12.351655   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:12.351558   24535 retry.go:31] will retry after 1.456003509s: waiting for machine to come up
	I0725 17:47:13.809169   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:13.809755   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:13.809781   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:13.809690   24535 retry.go:31] will retry after 2.262366194s: waiting for machine to come up
	I0725 17:47:16.074108   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:16.074663   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:16.074705   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:16.074637   24535 retry.go:31] will retry after 1.83642278s: waiting for machine to come up
	I0725 17:47:17.913594   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:17.914068   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:17.914110   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:17.914028   24535 retry.go:31] will retry after 2.300261449s: waiting for machine to come up
	I0725 17:47:20.217284   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:20.217819   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:20.217845   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:20.217749   24535 retry.go:31] will retry after 3.900460116s: waiting for machine to come up
	I0725 17:47:24.121432   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:24.121920   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find current IP address of domain ha-174036-m03 in network mk-ha-174036
	I0725 17:47:24.121948   23738 main.go:141] libmachine: (ha-174036-m03) DBG | I0725 17:47:24.121884   24535 retry.go:31] will retry after 4.780794251s: waiting for machine to come up
	I0725 17:47:28.906153   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:28.906612   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has current primary IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:28.906651   23738 main.go:141] libmachine: (ha-174036-m03) Found IP for machine: 192.168.39.253
	I0725 17:47:28.906676   23738 main.go:141] libmachine: (ha-174036-m03) Reserving static IP address...
	I0725 17:47:28.907028   23738 main.go:141] libmachine: (ha-174036-m03) DBG | unable to find host DHCP lease matching {name: "ha-174036-m03", mac: "52:54:00:44:8c:91", ip: "192.168.39.253"} in network mk-ha-174036
	I0725 17:47:28.979167   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Getting to WaitForSSH function...
	I0725 17:47:28.979197   23738 main.go:141] libmachine: (ha-174036-m03) Reserved static IP address: 192.168.39.253
	I0725 17:47:28.979210   23738 main.go:141] libmachine: (ha-174036-m03) Waiting for SSH to be available...
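Editor's note: the "waiting for machine to come up" lines above are a poll loop; each failed lookup of the domain's DHCP lease schedules another attempt after a progressively longer delay. A minimal sketch of that pattern, assuming a hypothetical lookupLeaseIP helper and illustrative delay values:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupLeaseIP is a stand-in for querying the libvirt network for the
    // domain's DHCP lease; it returns an error until the lease appears.
    func lookupLeaseIP(mac string) (string, error) { return "", errors.New("no lease yet") }

    // waitForIP polls for the lease, growing the delay between attempts,
    // mirroring the "will retry after ..." intervals in the log above.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupLeaseIP(mac); err == nil {
    			return ip, nil
    		}
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2)))) // add jitter
    		delay = delay * 3 / 2
    	}
    	return "", fmt.Errorf("no IP for %s within %s", mac, timeout)
    }

    func main() {
    	ip, err := waitForIP("52:54:00:44:8c:91", 3*time.Minute)
    	fmt.Println(ip, err)
    }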
	I0725 17:47:28.981966   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:28.982399   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:28.982424   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:28.982612   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Using SSH client type: external
	I0725 17:47:28.982637   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa (-rw-------)
	I0725 17:47:28.982664   23738 main.go:141] libmachine: (ha-174036-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 17:47:28.982678   23738 main.go:141] libmachine: (ha-174036-m03) DBG | About to run SSH command:
	I0725 17:47:28.982691   23738 main.go:141] libmachine: (ha-174036-m03) DBG | exit 0
	I0725 17:47:29.104524   23738 main.go:141] libmachine: (ha-174036-m03) DBG | SSH cmd err, output: <nil>: 
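Editor's note: the WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until it succeeds. A small sketch of that probe using os/exec (address and key path taken from the log; the retry cadence and attempt limit are illustrative, not minikube's actual logic):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady runs `exit 0` over SSH the same way the external-client path
    // in the log does, and reports whether the command succeeded.
    func sshReady(addr, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", keyPath,
    		"docker@"+addr, "exit 0")
    	return cmd.Run() == nil
    }

    func main() {
    	addr := "192.168.39.253"
    	key := "/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa"
    	for i := 0; i < 60; i++ {
    		if sshReady(addr, key) {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for SSH")
    }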
	I0725 17:47:29.104792   23738 main.go:141] libmachine: (ha-174036-m03) KVM machine creation complete!
	I0725 17:47:29.105082   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetConfigRaw
	I0725 17:47:29.105588   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:29.105812   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:29.105968   23738 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 17:47:29.105982   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetState
	I0725 17:47:29.107287   23738 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 17:47:29.107300   23738 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 17:47:29.107305   23738 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 17:47:29.107311   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.109674   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.110232   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.110247   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.110490   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.110674   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.110822   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.110993   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.111133   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:29.111379   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:29.111406   23738 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 17:47:29.211331   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:47:29.211353   23738 main.go:141] libmachine: Detecting the provisioner...
	I0725 17:47:29.211365   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.214126   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.214477   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.214506   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.214720   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.214934   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.215100   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.215258   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.215395   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:29.215555   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:29.215574   23738 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 17:47:29.316900   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 17:47:29.316991   23738 main.go:141] libmachine: found compatible host: buildroot
	I0725 17:47:29.317005   23738 main.go:141] libmachine: Provisioning with buildroot...
	I0725 17:47:29.317013   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetMachineName
	I0725 17:47:29.317252   23738 buildroot.go:166] provisioning hostname "ha-174036-m03"
	I0725 17:47:29.317280   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetMachineName
	I0725 17:47:29.317469   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.320169   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.320705   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.320741   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.320944   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.321149   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.321335   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.321526   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.321704   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:29.321855   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:29.321870   23738 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174036-m03 && echo "ha-174036-m03" | sudo tee /etc/hostname
	I0725 17:47:29.441455   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174036-m03
	
	I0725 17:47:29.441483   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.444461   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.444839   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.444855   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.445070   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.445250   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.445430   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.445615   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.445789   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:29.445952   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:29.445966   23738 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174036-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174036-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174036-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:47:29.561536   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:47:29.561568   23738 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 17:47:29.561586   23738 buildroot.go:174] setting up certificates
	I0725 17:47:29.561595   23738 provision.go:84] configureAuth start
	I0725 17:47:29.561607   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetMachineName
	I0725 17:47:29.561852   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:47:29.564773   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.565253   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.565279   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.565506   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.568428   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.568915   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.568945   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.569100   23738 provision.go:143] copyHostCerts
	I0725 17:47:29.569133   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:47:29.569171   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 17:47:29.569181   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:47:29.569265   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 17:47:29.569360   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:47:29.569384   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 17:47:29.569393   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:47:29.569426   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 17:47:29.569510   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:47:29.569539   23738 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 17:47:29.569548   23738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:47:29.569596   23738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 17:47:29.569672   23738 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.ha-174036-m03 san=[127.0.0.1 192.168.39.253 ha-174036-m03 localhost minikube]
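Editor's note: configureAuth issues a machine-specific server certificate signed by the minikube CA, with the SANs listed above (127.0.0.1, 192.168.39.253, ha-174036-m03, localhost, minikube). A condensed sketch of that issuance with crypto/x509; this is not minikube's code, a throwaway self-signed CA stands in for ca.pem/ca-key.pem, and error handling is minimal:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // issueServerCert signs a server cert for the new node with the CA pair.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-174036-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // matches CertExpiration:26280h0m0s
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-174036-m03", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.253")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return err
    	}
    	return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

    func main() {
    	// Throwaway CA purely so the example runs; in practice the CA is loaded from disk.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)
    	if err := issueServerCert(caCert, caKey); err != nil {
    		log.Fatal(err)
    	}
    }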
	I0725 17:47:29.755228   23738 provision.go:177] copyRemoteCerts
	I0725 17:47:29.755279   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:47:29.755301   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.758170   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.758515   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.758583   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.758689   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.758879   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.759063   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.759224   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:47:29.837734   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 17:47:29.837823   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 17:47:29.863548   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 17:47:29.863610   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 17:47:29.887142   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 17:47:29.887207   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 17:47:29.908900   23738 provision.go:87] duration metric: took 347.291166ms to configureAuth
	I0725 17:47:29.908928   23738 buildroot.go:189] setting minikube options for container-runtime
	I0725 17:47:29.909156   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:47:29.909237   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:29.912126   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.912498   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:29.912524   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:29.912744   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:29.912902   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.913051   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:29.913125   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:29.913254   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:29.913428   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:29.913447   23738 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 17:47:30.188871   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 17:47:30.188915   23738 main.go:141] libmachine: Checking connection to Docker...
	I0725 17:47:30.188927   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetURL
	I0725 17:47:30.190321   23738 main.go:141] libmachine: (ha-174036-m03) DBG | Using libvirt version 6000000
	I0725 17:47:30.192495   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.192847   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.192867   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.193018   23738 main.go:141] libmachine: Docker is up and running!
	I0725 17:47:30.193040   23738 main.go:141] libmachine: Reticulating splines...
	I0725 17:47:30.193046   23738 client.go:171] duration metric: took 24.17509551s to LocalClient.Create
	I0725 17:47:30.193077   23738 start.go:167] duration metric: took 24.175175089s to libmachine.API.Create "ha-174036"
	I0725 17:47:30.193090   23738 start.go:293] postStartSetup for "ha-174036-m03" (driver="kvm2")
	I0725 17:47:30.193103   23738 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:47:30.193127   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:30.193342   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:47:30.193381   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:30.195929   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.196262   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.196286   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.196468   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:30.196661   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:30.196786   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:30.196934   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:47:30.274721   23738 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:47:30.278949   23738 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 17:47:30.278974   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 17:47:30.279050   23738 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 17:47:30.279138   23738 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 17:47:30.279149   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 17:47:30.279270   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:47:30.288261   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:47:30.311910   23738 start.go:296] duration metric: took 118.808085ms for postStartSetup
	I0725 17:47:30.311982   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetConfigRaw
	I0725 17:47:30.312607   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:47:30.315653   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.316044   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.316070   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.316427   23738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:47:30.316631   23738 start.go:128] duration metric: took 24.31693959s to createHost
	I0725 17:47:30.316652   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:30.318999   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.319393   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.319421   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.319554   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:30.319735   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:30.319887   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:30.320039   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:30.320184   23738 main.go:141] libmachine: Using SSH client type: native
	I0725 17:47:30.320394   23738 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0725 17:47:30.320407   23738 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 17:47:30.420797   23738 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721929650.379347023
	
	I0725 17:47:30.420830   23738 fix.go:216] guest clock: 1721929650.379347023
	I0725 17:47:30.420843   23738 fix.go:229] Guest: 2024-07-25 17:47:30.379347023 +0000 UTC Remote: 2024-07-25 17:47:30.316641621 +0000 UTC m=+150.000690675 (delta=62.705402ms)
	I0725 17:47:30.420867   23738 fix.go:200] guest clock delta is within tolerance: 62.705402ms
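Editor's note: the clock check above runs `date +%s.%N` on the guest and compares it with the host timestamp recorded just before the SSH call, accepting the node only if the drift is small. A tiny sketch of that comparison, using the values from the log (the tolerance value here is illustrative):

    package main

    import (
    	"fmt"
    	"time"
    )

    // skewWithinTolerance reports whether guest and host clocks agree closely
    // enough; the log above shows a delta of about 62.7ms.
    func skewWithinTolerance(guest, host time.Time, tol time.Duration) bool {
    	d := guest.Sub(host)
    	if d < 0 {
    		d = -d
    	}
    	return d <= tol
    }

    func main() {
    	guest := time.Unix(1721929650, 379347023) // parsed from "1721929650.379347023"
    	host := time.Date(2024, 7, 25, 17, 47, 30, 316641621, time.UTC)
    	fmt.Println(skewWithinTolerance(guest, host, time.Second))
    }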
	I0725 17:47:30.420874   23738 start.go:83] releasing machines lock for "ha-174036-m03", held for 24.421343893s
	I0725 17:47:30.420898   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:30.421209   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:47:30.424796   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.425218   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.425244   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.426980   23738 out.go:177] * Found network options:
	I0725 17:47:30.428405   23738 out.go:177]   - NO_PROXY=192.168.39.165,192.168.39.197
	W0725 17:47:30.429737   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 17:47:30.429768   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 17:47:30.429787   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:30.430386   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:30.430612   23738 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:47:30.430731   23738 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 17:47:30.430770   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	W0725 17:47:30.430824   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	W0725 17:47:30.430853   23738 proxy.go:119] fail to check proxy env: Error ip not in block
	I0725 17:47:30.430981   23738 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 17:47:30.431009   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:47:30.433666   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.433923   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.434113   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.434139   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.434306   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:30.434333   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:30.434346   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:30.434531   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:30.434539   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:47:30.434681   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:47:30.434751   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:30.434825   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:47:30.434911   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:47:30.434968   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:47:30.665372   23738 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 17:47:30.671008   23738 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 17:47:30.671083   23738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 17:47:30.687466   23738 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 17:47:30.687490   23738 start.go:495] detecting cgroup driver to use...
	I0725 17:47:30.687589   23738 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 17:47:30.704846   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 17:47:30.718497   23738 docker.go:217] disabling cri-docker service (if available) ...
	I0725 17:47:30.718557   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 17:47:30.734205   23738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 17:47:30.747700   23738 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 17:47:30.877079   23738 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 17:47:31.022238   23738 docker.go:233] disabling docker service ...
	I0725 17:47:31.022307   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 17:47:31.035702   23738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 17:47:31.047950   23738 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 17:47:31.168087   23738 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 17:47:31.294928   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 17:47:31.308064   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:47:31.325628   23738 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 17:47:31.325689   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.335135   23738 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 17:47:31.335209   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.344896   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.354598   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.364175   23738 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 17:47:31.374418   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.383970   23738 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:47:31.400144   23738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
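	The sed commands above rewrite the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf so it uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager. A rough Go equivalent of those two substitutions, shown only as a sketch (paths and values taken from the log, everything else assumed):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Mirror of the sed edits above: point cri-o at the desired pause image and
// cgroup manager by rewriting the 02-crio.conf drop-in in place.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}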
	I0725 17:47:31.409589   23738 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 17:47:31.418301   23738 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 17:47:31.418348   23738 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 17:47:31.429829   23738 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 17:47:31.439026   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:47:31.567752   23738 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 17:47:31.697089   23738 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 17:47:31.697150   23738 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 17:47:31.701513   23738 start.go:563] Will wait 60s for crictl version
	I0725 17:47:31.701591   23738 ssh_runner.go:195] Run: which crictl
	I0725 17:47:31.705333   23738 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 17:47:31.744775   23738 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 17:47:31.744860   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:47:31.773053   23738 ssh_runner.go:195] Run: crio --version
	I0725 17:47:31.802779   23738 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 17:47:31.804281   23738 out.go:177]   - env NO_PROXY=192.168.39.165
	I0725 17:47:31.805566   23738 out.go:177]   - env NO_PROXY=192.168.39.165,192.168.39.197
	I0725 17:47:31.806678   23738 main.go:141] libmachine: (ha-174036-m03) Calling .GetIP
	I0725 17:47:31.809588   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:31.810014   23738 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:47:31.810040   23738 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:47:31.810252   23738 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 17:47:31.814039   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
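	The grep/echo pipeline above makes the host.minikube.internal mapping idempotent: any previous entry is dropped and a single fresh line is appended. A small Go sketch of the same rewrite, with the IP and hostname from the log and everything else illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so it contains exactly
// one line mapping ip to host, mirroring the grep -v / echo pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale mapping for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Values taken from the log; writing /etc/hosts needs root, so a copy of
	// the file is the safer target when trying this out.
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}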
	I0725 17:47:31.826045   23738 mustload.go:65] Loading cluster: ha-174036
	I0725 17:47:31.826299   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:47:31.826543   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:47:31.826577   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:47:31.841041   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39655
	I0725 17:47:31.841482   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:47:31.841992   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:47:31.842016   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:47:31.842322   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:47:31.842497   23738 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:47:31.843997   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:47:31.844306   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:47:31.844362   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:47:31.859540   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33011
	I0725 17:47:31.859990   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:47:31.860424   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:47:31.860445   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:47:31.860735   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:47:31.861392   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:47:31.861548   23738 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036 for IP: 192.168.39.253
	I0725 17:47:31.861558   23738 certs.go:194] generating shared ca certs ...
	I0725 17:47:31.861570   23738 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:47:31.861695   23738 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 17:47:31.861732   23738 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 17:47:31.861739   23738 certs.go:256] generating profile certs ...
	I0725 17:47:31.861800   23738 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key
	I0725 17:47:31.861824   23738 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.d1562e16
	I0725 17:47:31.861838   23738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.d1562e16 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.197 192.168.39.253 192.168.39.254]
	I0725 17:47:31.960154   23738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.d1562e16 ...
	I0725 17:47:31.960181   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.d1562e16: {Name:mk567cb329724f7d5be3ef9d2ac018eed8def8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:47:31.960345   23738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.d1562e16 ...
	I0725 17:47:31.960358   23738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.d1562e16: {Name:mke962a58894b471ea02d085e827bcbcccbc3ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:47:31.960426   23738 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.d1562e16 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt
	I0725 17:47:31.960552   23738 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.d1562e16 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key
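	certs.go:363 above issues a fresh apiserver serving certificate for the new control-plane node, signed by the shared minikubeCA, with the cluster service IP, localhost, the existing node IPs, the new node's IP and the VIP as SANs. A self-contained crypto/x509 sketch of that shape; a throwaway in-memory CA stands in for minikubeCA, and key sizes, lifetimes and subjects are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for the existing minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.165"), net.ParseIP("192.168.39.197"),
			net.ParseIP("192.168.39.253"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}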
	I0725 17:47:31.960674   23738 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key
	I0725 17:47:31.960689   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 17:47:31.960700   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 17:47:31.960713   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 17:47:31.960725   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 17:47:31.960735   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 17:47:31.960747   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 17:47:31.960762   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 17:47:31.960774   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 17:47:31.960814   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 17:47:31.960840   23738 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 17:47:31.960849   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:47:31.960871   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 17:47:31.960891   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:47:31.960913   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 17:47:31.960949   23738 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:47:31.960974   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:47:31.960987   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 17:47:31.961001   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 17:47:31.961030   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:47:31.963724   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:47:31.964094   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:47:31.964124   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:47:31.964224   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:47:31.964431   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:47:31.964608   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:47:31.964727   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:47:32.040636   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0725 17:47:32.045700   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0725 17:47:32.058258   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0725 17:47:32.063158   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0725 17:47:32.073727   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0725 17:47:32.077838   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0725 17:47:32.089179   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0725 17:47:32.094387   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0725 17:47:32.106398   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0725 17:47:32.110441   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0725 17:47:32.120948   23738 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0725 17:47:32.128154   23738 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0725 17:47:32.140221   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:47:32.164723   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 17:47:32.187719   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:47:32.211502   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 17:47:32.233875   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0725 17:47:32.256196   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 17:47:32.277769   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:47:32.300660   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:47:32.323709   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:47:32.345708   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 17:47:32.367308   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 17:47:32.391779   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0725 17:47:32.406921   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0725 17:47:32.424209   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0725 17:47:32.440903   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0725 17:47:32.457558   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0725 17:47:32.472596   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0725 17:47:32.488844   23738 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0725 17:47:32.504138   23738 ssh_runner.go:195] Run: openssl version
	I0725 17:47:32.509438   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:47:32.521083   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:47:32.525346   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:47:32.525408   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:47:32.531006   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:47:32.542348   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 17:47:32.553546   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 17:47:32.557553   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 17:47:32.557601   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 17:47:32.562820   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 17:47:32.574273   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 17:47:32.585053   23738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 17:47:32.589196   23738 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 17:47:32.589258   23738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 17:47:32.594782   23738 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
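	The openssl x509 -hash / ln -fs pairs above install each CA certificate under /etc/ssl/certs as <subject-hash>.0, which is how OpenSSL-based clients discover trusted CAs. A small sketch that shells out to the same openssl invocation and creates the symlink; the helper name and target paths are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert installs certPath under certsDir as <openssl-subject-hash>.0,
// mirroring the `openssl x509 -hash -noout -in` + `ln -fs` steps in the log.
func linkCACert(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}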
	I0725 17:47:32.605575   23738 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 17:47:32.609467   23738 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 17:47:32.609519   23738 kubeadm.go:934] updating node {m03 192.168.39.253 8443 v1.30.3 crio true true} ...
	I0725 17:47:32.609604   23738 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174036-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 17:47:32.609635   23738 kube-vip.go:115] generating kube-vip config ...
	I0725 17:47:32.609672   23738 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 17:47:32.624865   23738 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 17:47:32.624956   23738 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0725 17:47:32.625018   23738 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 17:47:32.635197   23738 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0725 17:47:32.635255   23738 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0725 17:47:32.644188   23738 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0725 17:47:32.644215   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0725 17:47:32.644275   23738 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0725 17:47:32.644283   23738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0725 17:47:32.644297   23738 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0725 17:47:32.644336   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:47:32.644339   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0725 17:47:32.644647   23738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0725 17:47:32.649064   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0725 17:47:32.649088   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0725 17:47:32.676867   23738 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0725 17:47:32.676945   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0725 17:47:32.676971   23738 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0725 17:47:32.676984   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0725 17:47:32.719424   23738 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0725 17:47:32.719471   23738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
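	binary.go:74 above notes that, when the local cache is empty, kubectl/kubeadm/kubelet come straight from dl.k8s.io with a checksum= query pointing at the published .sha256 file. A hedged sketch of that download-and-verify step for kubectl (the URL is taken from the log; the output file name and error handling are illustrative):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url and returns the response body. Illustrative only.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	want := strings.Fields(string(sum))[0]
	h := sha256.Sum256(bin)
	got := hex.EncodeToString(h[:])
	if got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
		os.Exit(1)
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("verified and wrote kubectl")
}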
	I0725 17:47:33.532625   23738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0725 17:47:33.541624   23738 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0725 17:47:33.556660   23738 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:47:33.574601   23738 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0725 17:47:33.591732   23738 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0725 17:47:33.595587   23738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 17:47:33.606947   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:47:33.733582   23738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:47:33.750463   23738 host.go:66] Checking if "ha-174036" exists ...
	I0725 17:47:33.750950   23738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:47:33.751005   23738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:47:33.769058   23738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0725 17:47:33.769470   23738 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:47:33.769932   23738 main.go:141] libmachine: Using API Version  1
	I0725 17:47:33.769953   23738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:47:33.770267   23738 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:47:33.770603   23738 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:47:33.770763   23738 start.go:317] joinCluster: &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:47:33.770881   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0725 17:47:33.770901   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:47:33.773973   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:47:33.774466   23738 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:47:33.774493   23738 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:47:33.774629   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:47:33.774802   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:47:33.774976   23738 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:47:33.775124   23738 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:47:33.929157   23738 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:47:33.929205   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qeias8.mpk1vfnxbq293g06 --discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174036-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443"
	I0725 17:47:57.541987   23738 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qeias8.mpk1vfnxbq293g06 --discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174036-m03 --control-plane --apiserver-advertise-address=192.168.39.253 --apiserver-bind-port=8443": (23.612751595s)
	I0725 17:47:57.542023   23738 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0725 17:47:58.157111   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174036-m03 minikube.k8s.io/updated_at=2024_07_25T17_47_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=ha-174036 minikube.k8s.io/primary=false
	I0725 17:47:58.334203   23738 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174036-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0725 17:47:58.447957   23738 start.go:319] duration metric: took 24.677188517s to joinCluster
	I0725 17:47:58.448028   23738 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 17:47:58.448412   23738 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:47:58.449023   23738 out.go:177] * Verifying Kubernetes components...
	I0725 17:47:58.450589   23738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:47:58.698316   23738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:47:58.718280   23738 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:47:58.718599   23738 kapi.go:59] client config for ha-174036: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0725 17:47:58.718695   23738 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.165:8443
	I0725 17:47:58.718886   23738 node_ready.go:35] waiting up to 6m0s for node "ha-174036-m03" to be "Ready" ...
	I0725 17:47:58.718969   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:47:58.718979   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:58.718990   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:58.718999   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:58.722618   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:59.219135   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:47:59.219158   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:59.219170   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:59.219174   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:59.222294   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:47:59.719212   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:47:59.719233   23738 round_trippers.go:469] Request Headers:
	I0725 17:47:59.719243   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:47:59.719249   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:47:59.722731   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:00.219445   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:00.219469   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:00.219477   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:00.219481   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:00.222884   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:00.719440   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:00.719467   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:00.719480   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:00.719491   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:00.723146   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:00.723842   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:01.219773   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:01.219795   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:01.219805   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:01.219811   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:01.223120   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:01.719063   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:01.719084   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:01.719091   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:01.719097   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:01.722342   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:02.219353   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:02.219372   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:02.219381   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:02.219387   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:02.222592   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:02.719391   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:02.719418   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:02.719429   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:02.719435   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:02.723071   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:03.220025   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:03.220046   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:03.220054   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:03.220057   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:03.224214   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:48:03.224899   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:03.719264   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:03.719283   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:03.719304   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:03.719309   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:03.722967   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:04.219329   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:04.219349   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:04.219357   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:04.219362   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:04.223058   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:04.719975   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:04.720000   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:04.720010   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:04.720018   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:04.730270   23738 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0725 17:48:05.220079   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:05.220100   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:05.220110   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:05.220115   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:05.223432   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:05.719932   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:05.719953   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:05.719962   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:05.719967   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:05.723272   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:05.723770   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:06.219856   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:06.219878   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:06.219885   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:06.219890   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:06.223221   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:06.719324   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:06.719348   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:06.719356   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:06.719360   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:06.722895   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:07.219728   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:07.219752   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:07.219763   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:07.219769   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:07.223364   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:07.719181   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:07.719204   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:07.719211   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:07.719214   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:07.722485   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:08.219257   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:08.219308   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:08.219321   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:08.219328   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:08.222606   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:08.223098   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:08.719258   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:08.719279   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:08.719303   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:08.719313   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:08.723115   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:09.219392   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:09.219411   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:09.219419   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:09.219427   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:09.222531   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:09.719588   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:09.719613   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:09.719623   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:09.719654   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:09.722904   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:10.219781   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:10.219803   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:10.219814   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:10.219821   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:10.222896   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:10.223460   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:10.719656   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:10.719675   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:10.719683   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:10.719687   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:10.723441   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:11.219804   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:11.219828   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:11.219838   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:11.219845   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:11.223106   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:11.720078   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:11.720096   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:11.720104   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:11.720109   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:11.723524   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:12.219524   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:12.219545   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:12.219554   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:12.219557   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:12.223317   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:12.224194   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:12.719753   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:12.719781   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:12.719795   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:12.719800   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:12.722724   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:13.219797   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:13.219822   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:13.219835   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:13.219840   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:13.222932   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:13.719110   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:13.719136   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:13.719147   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:13.719153   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:13.722461   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:14.219093   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:14.219119   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:14.219132   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:14.219137   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:14.222878   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:14.719715   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:14.719742   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:14.719750   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:14.719754   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:14.723507   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:14.724092   23738 node_ready.go:53] node "ha-174036-m03" has status "Ready":"False"
	I0725 17:48:15.219245   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:15.219262   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.219271   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.219275   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.222370   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:15.719932   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:15.719953   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.719961   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.719965   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.723368   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:15.723985   23738 node_ready.go:49] node "ha-174036-m03" has status "Ready":"True"
	I0725 17:48:15.724003   23738 node_ready.go:38] duration metric: took 17.005101402s for node "ha-174036-m03" to be "Ready" ...
	I0725 17:48:15.724011   23738 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:48:15.724074   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:48:15.724084   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.724091   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.724099   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.731583   23738 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0725 17:48:15.738462   23738 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.738534   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-flblg
	I0725 17:48:15.738543   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.738550   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.738553   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.741346   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.741863   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:15.741877   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.741884   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.741887   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.744242   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.744736   23738 pod_ready.go:92] pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:15.744751   23738 pod_ready.go:81] duration metric: took 6.267081ms for pod "coredns-7db6d8ff4d-flblg" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.744759   23738 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.744800   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vtr9p
	I0725 17:48:15.744807   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.744814   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.744821   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.746839   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.747321   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:15.747335   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.747345   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.747350   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.749391   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.749790   23738 pod_ready.go:92] pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:15.749807   23738 pod_ready.go:81] duration metric: took 5.041261ms for pod "coredns-7db6d8ff4d-vtr9p" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.749818   23738 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.749878   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036
	I0725 17:48:15.749887   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.749893   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.749901   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.751999   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.752590   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:15.752601   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.752609   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.752612   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.755103   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.755910   23738 pod_ready.go:92] pod "etcd-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:15.755928   23738 pod_ready.go:81] duration metric: took 6.103409ms for pod "etcd-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.755945   23738 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.755992   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036-m02
	I0725 17:48:15.755999   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.756006   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.756009   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.758199   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.758685   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:15.758698   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.758704   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.758713   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.760829   23738 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0725 17:48:15.761259   23738 pod_ready.go:92] pod "etcd-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:15.761272   23738 pod_ready.go:81] duration metric: took 5.317765ms for pod "etcd-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.761279   23738 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:15.920658   23738 request.go:629] Waited for 159.333662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036-m03
	I0725 17:48:15.920744   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174036-m03
	I0725 17:48:15.920750   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:15.920758   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:15.920764   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:15.924276   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:16.120649   23738 request.go:629] Waited for 195.365321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:16.120714   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:16.120722   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:16.120730   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:16.120736   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:16.124364   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:16.124865   23738 pod_ready.go:92] pod "etcd-ha-174036-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:16.124882   23738 pod_ready.go:81] duration metric: took 363.597449ms for pod "etcd-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:16.124897   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:16.321009   23738 request.go:629] Waited for 196.007507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036
	I0725 17:48:16.321059   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036
	I0725 17:48:16.321064   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:16.321070   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:16.321074   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:16.324418   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:16.520400   23738 request.go:629] Waited for 195.329584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:16.520449   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:16.520454   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:16.520482   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:16.520494   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:16.523544   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:16.524184   23738 pod_ready.go:92] pod "kube-apiserver-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:16.524207   23738 pod_ready.go:81] duration metric: took 399.30203ms for pod "kube-apiserver-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:16.524221   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:16.720217   23738 request.go:629] Waited for 195.919181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m02
	I0725 17:48:16.720295   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m02
	I0725 17:48:16.720301   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:16.720309   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:16.720315   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:16.726447   23738 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0725 17:48:16.920761   23738 request.go:629] Waited for 193.540089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:16.920846   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:16.920854   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:16.920862   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:16.920867   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:16.924432   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:16.925242   23738 pod_ready.go:92] pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:16.925261   23738 pod_ready.go:81] duration metric: took 401.032547ms for pod "kube-apiserver-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:16.925271   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:17.120383   23738 request.go:629] Waited for 195.022228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m03
	I0725 17:48:17.120469   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174036-m03
	I0725 17:48:17.120475   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:17.120482   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:17.120491   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:17.123936   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:17.319960   23738 request.go:629] Waited for 195.291804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:17.320011   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:17.320017   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:17.320024   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:17.320030   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:17.323598   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:17.324103   23738 pod_ready.go:92] pod "kube-apiserver-ha-174036-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:17.324120   23738 pod_ready.go:81] duration metric: took 398.839297ms for pod "kube-apiserver-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:17.324129   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:17.520346   23738 request.go:629] Waited for 196.124151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036
	I0725 17:48:17.520410   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036
	I0725 17:48:17.520416   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:17.520423   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:17.520427   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:17.523759   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:17.720985   23738 request.go:629] Waited for 196.496138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:17.721141   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:17.721156   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:17.721167   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:17.721178   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:17.728510   23738 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0725 17:48:17.729883   23738 pod_ready.go:92] pod "kube-controller-manager-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:17.729912   23738 pod_ready.go:81] duration metric: took 405.774903ms for pod "kube-controller-manager-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:17.729929   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:17.921047   23738 request.go:629] Waited for 191.006912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m02
	I0725 17:48:17.921158   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m02
	I0725 17:48:17.921166   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:17.921175   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:17.921180   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:17.924660   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:18.120741   23738 request.go:629] Waited for 195.355142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:18.120823   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:18.120831   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:18.120839   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:18.120847   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:18.124807   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:18.125897   23738 pod_ready.go:92] pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:18.125917   23738 pod_ready.go:81] duration metric: took 395.981033ms for pod "kube-controller-manager-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:18.125928   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:18.320946   23738 request.go:629] Waited for 194.947565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m03
	I0725 17:48:18.321034   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174036-m03
	I0725 17:48:18.321045   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:18.321057   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:18.321065   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:18.325264   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:48:18.520737   23738 request.go:629] Waited for 194.3815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:18.520822   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:18.520832   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:18.520844   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:18.520853   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:18.524251   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:18.525009   23738 pod_ready.go:92] pod "kube-controller-manager-ha-174036-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:18.525030   23738 pod_ready.go:81] duration metric: took 399.093257ms for pod "kube-controller-manager-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:18.525044   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5klkv" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:18.720032   23738 request.go:629] Waited for 194.926984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5klkv
	I0725 17:48:18.720105   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5klkv
	I0725 17:48:18.720111   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:18.720118   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:18.720122   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:18.723688   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:18.920621   23738 request.go:629] Waited for 196.358054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:18.920711   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:18.920718   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:18.920727   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:18.920734   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:18.924836   23738 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0725 17:48:18.925351   23738 pod_ready.go:92] pod "kube-proxy-5klkv" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:18.925371   23738 pod_ready.go:81] duration metric: took 400.32091ms for pod "kube-proxy-5klkv" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:18.925381   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6jdn" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:19.120398   23738 request.go:629] Waited for 194.943515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6jdn
	I0725 17:48:19.120449   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6jdn
	I0725 17:48:19.120454   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:19.120463   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:19.120468   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:19.124001   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:19.320374   23738 request.go:629] Waited for 195.386277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:19.320450   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:19.320470   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:19.320486   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:19.320490   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:19.324195   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:19.324867   23738 pod_ready.go:92] pod "kube-proxy-s6jdn" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:19.324886   23738 pod_ready.go:81] duration metric: took 399.499786ms for pod "kube-proxy-s6jdn" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:19.324896   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xwvdm" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:19.520953   23738 request.go:629] Waited for 195.983035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwvdm
	I0725 17:48:19.521027   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xwvdm
	I0725 17:48:19.521034   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:19.521045   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:19.521055   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:19.524663   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:19.720661   23738 request.go:629] Waited for 195.346701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:19.720717   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:19.720724   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:19.720772   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:19.720782   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:19.723887   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:19.724496   23738 pod_ready.go:92] pod "kube-proxy-xwvdm" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:19.724518   23738 pod_ready.go:81] duration metric: took 399.615118ms for pod "kube-proxy-xwvdm" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:19.725022   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:19.920853   23738 request.go:629] Waited for 195.756105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036
	I0725 17:48:19.920931   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036
	I0725 17:48:19.920943   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:19.920958   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:19.920965   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:19.924401   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.120652   23738 request.go:629] Waited for 195.254606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:20.120715   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036
	I0725 17:48:20.120722   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.120731   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.120738   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.124100   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.124783   23738 pod_ready.go:92] pod "kube-scheduler-ha-174036" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:20.124804   23738 pod_ready.go:81] duration metric: took 399.766469ms for pod "kube-scheduler-ha-174036" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:20.124817   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:20.320407   23738 request.go:629] Waited for 195.516784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m02
	I0725 17:48:20.320469   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m02
	I0725 17:48:20.320475   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.320483   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.320487   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.323906   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.520612   23738 request.go:629] Waited for 195.929751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:20.520695   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m02
	I0725 17:48:20.520719   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.520734   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.520745   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.524429   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.524924   23738 pod_ready.go:92] pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:20.524940   23738 pod_ready.go:81] duration metric: took 400.115378ms for pod "kube-scheduler-ha-174036-m02" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:20.524950   23738 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:20.720056   23738 request.go:629] Waited for 195.03201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m03
	I0725 17:48:20.720144   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174036-m03
	I0725 17:48:20.720156   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.720167   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.720176   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.723832   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.920081   23738 request.go:629] Waited for 195.035781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:20.920141   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-174036-m03
	I0725 17:48:20.920146   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.920154   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.920157   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.923772   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:20.924230   23738 pod_ready.go:92] pod "kube-scheduler-ha-174036-m03" in "kube-system" namespace has status "Ready":"True"
	I0725 17:48:20.924249   23738 pod_ready.go:81] duration metric: took 399.291088ms for pod "kube-scheduler-ha-174036-m03" in "kube-system" namespace to be "Ready" ...
	I0725 17:48:20.924263   23738 pod_ready.go:38] duration metric: took 5.200241533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 17:48:20.924283   23738 api_server.go:52] waiting for apiserver process to appear ...
	I0725 17:48:20.924365   23738 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 17:48:20.939968   23738 api_server.go:72] duration metric: took 22.491903115s to wait for apiserver process to appear ...
	I0725 17:48:20.940000   23738 api_server.go:88] waiting for apiserver healthz status ...
	I0725 17:48:20.940023   23738 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0725 17:48:20.945387   23738 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0725 17:48:20.945467   23738 round_trippers.go:463] GET https://192.168.39.165:8443/version
	I0725 17:48:20.945476   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:20.945483   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:20.945490   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:20.946577   23738 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0725 17:48:20.946636   23738 api_server.go:141] control plane version: v1.30.3
	I0725 17:48:20.946649   23738 api_server.go:131] duration metric: took 6.642298ms to wait for apiserver health ...
	I0725 17:48:20.946657   23738 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 17:48:21.120465   23738 request.go:629] Waited for 173.750714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:48:21.120533   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:48:21.120552   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:21.120564   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:21.120577   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:21.127448   23738 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0725 17:48:21.134655   23738 system_pods.go:59] 24 kube-system pods found
	I0725 17:48:21.134690   23738 system_pods.go:61] "coredns-7db6d8ff4d-flblg" [94857bc1-d7ba-466b-91d7-e2d5041159f2] Running
	I0725 17:48:21.134697   23738 system_pods.go:61] "coredns-7db6d8ff4d-vtr9p" [fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a] Running
	I0725 17:48:21.134702   23738 system_pods.go:61] "etcd-ha-174036" [4e6a0109-6d8e-406e-9ca5-b7190bf72eab] Running
	I0725 17:48:21.134707   23738 system_pods.go:61] "etcd-ha-174036-m02" [9d58a70b-3891-441c-8eb8-214121847c63] Running
	I0725 17:48:21.134712   23738 system_pods.go:61] "etcd-ha-174036-m03" [512972cb-1314-4a63-bbd7-2737a4338be3] Running
	I0725 17:48:21.134716   23738 system_pods.go:61] "kindnet-2c2n8" [c8ed79cb-52d7-4dfa-a3a0-02329169d86c] Running
	I0725 17:48:21.134721   23738 system_pods.go:61] "kindnet-fcznc" [795e29b8-1fad-47ca-bc4e-0809d4063a10] Running
	I0725 17:48:21.134725   23738 system_pods.go:61] "kindnet-k4d8x" [3a912cbb-8702-492d-8316-258aedbd053d] Running
	I0725 17:48:21.134731   23738 system_pods.go:61] "kube-apiserver-ha-174036" [efbc55c3-c762-4d38-9602-02454c1ce8f4] Running
	I0725 17:48:21.134735   23738 system_pods.go:61] "kube-apiserver-ha-174036-m02" [2c704058-60b1-43fb-bde8-a5f7d8a9bc4f] Running
	I0725 17:48:21.134741   23738 system_pods.go:61] "kube-apiserver-ha-174036-m03" [08ade854-8ac6-45b0-a876-ca62d31c9382] Running
	I0725 17:48:21.134747   23738 system_pods.go:61] "kube-controller-manager-ha-174036" [bfdd16fe-f72f-4f8f-b175-b78d7dec78bb] Running
	I0725 17:48:21.134758   23738 system_pods.go:61] "kube-controller-manager-ha-174036-m02" [b181bc10-9572-43f4-9242-6e0676abdc64] Running
	I0725 17:48:21.134763   23738 system_pods.go:61] "kube-controller-manager-ha-174036-m03" [e742a05b-ae60-4e7a-9f16-d7a9555423d5] Running
	I0725 17:48:21.134770   23738 system_pods.go:61] "kube-proxy-5klkv" [cc83bed2-4af8-4de2-ac28-f9b62e75297b] Running
	I0725 17:48:21.134775   23738 system_pods.go:61] "kube-proxy-s6jdn" [f13b463b-f7f9-4b49-8e29-209cb153a6e6] Running
	I0725 17:48:21.134783   23738 system_pods.go:61] "kube-proxy-xwvdm" [aa62fb5e-6304-40b4-aa20-190c9ee56057] Running
	I0725 17:48:21.134789   23738 system_pods.go:61] "kube-scheduler-ha-174036" [4174968d-5004-47e6-b8fa-8c9ab4720f09] Running
	I0725 17:48:21.134797   23738 system_pods.go:61] "kube-scheduler-ha-174036-m02" [81a367fa-3418-45a0-85ca-f20549a43a2e] Running
	I0725 17:48:21.134802   23738 system_pods.go:61] "kube-scheduler-ha-174036-m03" [a922c6b3-064b-48e7-b43c-5d46df954b5c] Running
	I0725 17:48:21.134809   23738 system_pods.go:61] "kube-vip-ha-174036" [2ce4bfe5-5441-4a28-889e-7743367f32b2] Running
	I0725 17:48:21.134813   23738 system_pods.go:61] "kube-vip-ha-174036-m02" [5a70af21-4ee6-4270-8e25-3b81618c629a] Running
	I0725 17:48:21.134820   23738 system_pods.go:61] "kube-vip-ha-174036-m03" [ca677d83-2054-428e-aa5c-d95b15a57e1d] Running
	I0725 17:48:21.134825   23738 system_pods.go:61] "storage-provisioner" [c9354422-69ff-4676-80d1-4940badf9b4e] Running
	I0725 17:48:21.134834   23738 system_pods.go:74] duration metric: took 188.171619ms to wait for pod list to return data ...
	I0725 17:48:21.134846   23738 default_sa.go:34] waiting for default service account to be created ...
	I0725 17:48:21.320278   23738 request.go:629] Waited for 185.344351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0725 17:48:21.320366   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0725 17:48:21.320374   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:21.320384   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:21.320394   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:21.323682   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:21.323783   23738 default_sa.go:45] found service account: "default"
	I0725 17:48:21.323797   23738 default_sa.go:55] duration metric: took 188.941633ms for default service account to be created ...
	I0725 17:48:21.323805   23738 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 17:48:21.520189   23738 request.go:629] Waited for 196.302125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:48:21.520260   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0725 17:48:21.520270   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:21.520277   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:21.520284   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:21.527636   23738 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0725 17:48:21.533809   23738 system_pods.go:86] 24 kube-system pods found
	I0725 17:48:21.533834   23738 system_pods.go:89] "coredns-7db6d8ff4d-flblg" [94857bc1-d7ba-466b-91d7-e2d5041159f2] Running
	I0725 17:48:21.533839   23738 system_pods.go:89] "coredns-7db6d8ff4d-vtr9p" [fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a] Running
	I0725 17:48:21.533843   23738 system_pods.go:89] "etcd-ha-174036" [4e6a0109-6d8e-406e-9ca5-b7190bf72eab] Running
	I0725 17:48:21.533848   23738 system_pods.go:89] "etcd-ha-174036-m02" [9d58a70b-3891-441c-8eb8-214121847c63] Running
	I0725 17:48:21.533852   23738 system_pods.go:89] "etcd-ha-174036-m03" [512972cb-1314-4a63-bbd7-2737a4338be3] Running
	I0725 17:48:21.533856   23738 system_pods.go:89] "kindnet-2c2n8" [c8ed79cb-52d7-4dfa-a3a0-02329169d86c] Running
	I0725 17:48:21.533860   23738 system_pods.go:89] "kindnet-fcznc" [795e29b8-1fad-47ca-bc4e-0809d4063a10] Running
	I0725 17:48:21.533864   23738 system_pods.go:89] "kindnet-k4d8x" [3a912cbb-8702-492d-8316-258aedbd053d] Running
	I0725 17:48:21.533869   23738 system_pods.go:89] "kube-apiserver-ha-174036" [efbc55c3-c762-4d38-9602-02454c1ce8f4] Running
	I0725 17:48:21.533873   23738 system_pods.go:89] "kube-apiserver-ha-174036-m02" [2c704058-60b1-43fb-bde8-a5f7d8a9bc4f] Running
	I0725 17:48:21.533877   23738 system_pods.go:89] "kube-apiserver-ha-174036-m03" [08ade854-8ac6-45b0-a876-ca62d31c9382] Running
	I0725 17:48:21.533881   23738 system_pods.go:89] "kube-controller-manager-ha-174036" [bfdd16fe-f72f-4f8f-b175-b78d7dec78bb] Running
	I0725 17:48:21.533889   23738 system_pods.go:89] "kube-controller-manager-ha-174036-m02" [b181bc10-9572-43f4-9242-6e0676abdc64] Running
	I0725 17:48:21.533893   23738 system_pods.go:89] "kube-controller-manager-ha-174036-m03" [e742a05b-ae60-4e7a-9f16-d7a9555423d5] Running
	I0725 17:48:21.533899   23738 system_pods.go:89] "kube-proxy-5klkv" [cc83bed2-4af8-4de2-ac28-f9b62e75297b] Running
	I0725 17:48:21.533903   23738 system_pods.go:89] "kube-proxy-s6jdn" [f13b463b-f7f9-4b49-8e29-209cb153a6e6] Running
	I0725 17:48:21.533909   23738 system_pods.go:89] "kube-proxy-xwvdm" [aa62fb5e-6304-40b4-aa20-190c9ee56057] Running
	I0725 17:48:21.533913   23738 system_pods.go:89] "kube-scheduler-ha-174036" [4174968d-5004-47e6-b8fa-8c9ab4720f09] Running
	I0725 17:48:21.533917   23738 system_pods.go:89] "kube-scheduler-ha-174036-m02" [81a367fa-3418-45a0-85ca-f20549a43a2e] Running
	I0725 17:48:21.533921   23738 system_pods.go:89] "kube-scheduler-ha-174036-m03" [a922c6b3-064b-48e7-b43c-5d46df954b5c] Running
	I0725 17:48:21.533927   23738 system_pods.go:89] "kube-vip-ha-174036" [2ce4bfe5-5441-4a28-889e-7743367f32b2] Running
	I0725 17:48:21.533930   23738 system_pods.go:89] "kube-vip-ha-174036-m02" [5a70af21-4ee6-4270-8e25-3b81618c629a] Running
	I0725 17:48:21.533935   23738 system_pods.go:89] "kube-vip-ha-174036-m03" [ca677d83-2054-428e-aa5c-d95b15a57e1d] Running
	I0725 17:48:21.533939   23738 system_pods.go:89] "storage-provisioner" [c9354422-69ff-4676-80d1-4940badf9b4e] Running
	I0725 17:48:21.533945   23738 system_pods.go:126] duration metric: took 210.135527ms to wait for k8s-apps to be running ...
	I0725 17:48:21.533953   23738 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 17:48:21.533995   23738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 17:48:21.550490   23738 system_svc.go:56] duration metric: took 16.524706ms WaitForService to wait for kubelet
	I0725 17:48:21.550515   23738 kubeadm.go:582] duration metric: took 23.102455476s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:48:21.550534   23738 node_conditions.go:102] verifying NodePressure condition ...
	I0725 17:48:21.720962   23738 request.go:629] Waited for 170.343393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I0725 17:48:21.721016   23738 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I0725 17:48:21.721021   23738 round_trippers.go:469] Request Headers:
	I0725 17:48:21.721029   23738 round_trippers.go:473]     Accept: application/json, */*
	I0725 17:48:21.721033   23738 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0725 17:48:21.724763   23738 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0725 17:48:21.725594   23738 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:48:21.725612   23738 node_conditions.go:123] node cpu capacity is 2
	I0725 17:48:21.725625   23738 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:48:21.725629   23738 node_conditions.go:123] node cpu capacity is 2
	I0725 17:48:21.725634   23738 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 17:48:21.725638   23738 node_conditions.go:123] node cpu capacity is 2
	I0725 17:48:21.725644   23738 node_conditions.go:105] duration metric: took 175.104521ms to run NodePressure ...
	I0725 17:48:21.725659   23738 start.go:241] waiting for startup goroutines ...
	I0725 17:48:21.725687   23738 start.go:255] writing updated cluster config ...
	I0725 17:48:21.725957   23738 ssh_runner.go:195] Run: rm -f paused
	I0725 17:48:21.779194   23738 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 17:48:21.781309   23738 out.go:177] * Done! kubectl is now configured to use "ha-174036" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.174121098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721929973174098986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=881ce5a1-b0f7-4165-93dc-1ffa0fdbfa71 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.174614699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43c25bb5-5403-4c64-9398-1c5be63b93e3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.174678116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43c25bb5-5403-4c64-9398-1c5be63b93e3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.174953697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721929705860259350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571452334383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b4910d2dffd4c50b6c81ede4e225d7473a98b0a9834b082d0bcdc49420a72e,PodSandboxId:95f27d4d3811628521745f206fc71b1199b834cc198df0bca15269d0c0a5fafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721929571394940607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571379692343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba
2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721929559436837253,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172192955
5002599079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61b54c0418380759094fff6b18b34e532b3761b6449b09813306455a54f8ec0,PodSandboxId:d403d51e1490c2320c93a00c562f26f6f4e3da0bd721d3c74a9c87fd1aa30535,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17219295378
34324539,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb0a62f4a501312f477c94c22d0cf69,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd,PodSandboxId:9e04e99a376a1e1e62532e5b1714466bfe9df09c9f4cd099bef96b800c2cdba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721929534723301504,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721929534688682857,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721929534601299061,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526,PodSandboxId:209f2e15348a23b82a2ac856c72c9fd980284e4c17607853220bab880677bcba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721929534554019533,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43c25bb5-5403-4c64-9398-1c5be63b93e3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.219003480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dfe9e0a7-cf5f-4f16-8f03-fab82cd248d3 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.219089299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dfe9e0a7-cf5f-4f16-8f03-fab82cd248d3 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.219990762Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c9c36d7-1c4d-422d-b082-8e588d38284c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.220476991Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721929973220453092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c9c36d7-1c4d-422d-b082-8e588d38284c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.221015012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a41a12d2-caff-473d-9982-4fb664c1da51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.221066004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a41a12d2-caff-473d-9982-4fb664c1da51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.221279714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721929705860259350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571452334383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b4910d2dffd4c50b6c81ede4e225d7473a98b0a9834b082d0bcdc49420a72e,PodSandboxId:95f27d4d3811628521745f206fc71b1199b834cc198df0bca15269d0c0a5fafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721929571394940607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571379692343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba
2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721929559436837253,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172192955
5002599079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61b54c0418380759094fff6b18b34e532b3761b6449b09813306455a54f8ec0,PodSandboxId:d403d51e1490c2320c93a00c562f26f6f4e3da0bd721d3c74a9c87fd1aa30535,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17219295378
34324539,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb0a62f4a501312f477c94c22d0cf69,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd,PodSandboxId:9e04e99a376a1e1e62532e5b1714466bfe9df09c9f4cd099bef96b800c2cdba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721929534723301504,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721929534688682857,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721929534601299061,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526,PodSandboxId:209f2e15348a23b82a2ac856c72c9fd980284e4c17607853220bab880677bcba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721929534554019533,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a41a12d2-caff-473d-9982-4fb664c1da51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.263172249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1eae1e33-fef1-4281-b477-6628013fdc08 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.263245825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1eae1e33-fef1-4281-b477-6628013fdc08 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.265879384Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3692e187-df39-42a6-8d00-0aa3e40a46d5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.266335993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721929973266311163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3692e187-df39-42a6-8d00-0aa3e40a46d5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.266969672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf6bd920-98d7-4253-be61-55fc2ff2bc03 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.267043567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf6bd920-98d7-4253-be61-55fc2ff2bc03 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.267377344Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721929705860259350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571452334383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b4910d2dffd4c50b6c81ede4e225d7473a98b0a9834b082d0bcdc49420a72e,PodSandboxId:95f27d4d3811628521745f206fc71b1199b834cc198df0bca15269d0c0a5fafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721929571394940607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571379692343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba
2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721929559436837253,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172192955
5002599079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61b54c0418380759094fff6b18b34e532b3761b6449b09813306455a54f8ec0,PodSandboxId:d403d51e1490c2320c93a00c562f26f6f4e3da0bd721d3c74a9c87fd1aa30535,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17219295378
34324539,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb0a62f4a501312f477c94c22d0cf69,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd,PodSandboxId:9e04e99a376a1e1e62532e5b1714466bfe9df09c9f4cd099bef96b800c2cdba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721929534723301504,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721929534688682857,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721929534601299061,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526,PodSandboxId:209f2e15348a23b82a2ac856c72c9fd980284e4c17607853220bab880677bcba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721929534554019533,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf6bd920-98d7-4253-be61-55fc2ff2bc03 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.303940451Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88431d2f-4973-4b95-b5ab-819fe7342f9b name=/runtime.v1.RuntimeService/Version
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.304008688Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88431d2f-4973-4b95-b5ab-819fe7342f9b name=/runtime.v1.RuntimeService/Version
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.305057471Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3cdfac7-d010-4b20-a3de-e039abd86eda name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.305722057Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721929973305696473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3cdfac7-d010-4b20-a3de-e039abd86eda name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.306301188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7421e924-802c-4f78-9f0c-2bc9f9cf9bf3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.306349882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7421e924-802c-4f78-9f0c-2bc9f9cf9bf3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:52:53 ha-174036 crio[682]: time="2024-07-25 17:52:53.306736042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721929705860259350,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571452334383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b4910d2dffd4c50b6c81ede4e225d7473a98b0a9834b082d0bcdc49420a72e,PodSandboxId:95f27d4d3811628521745f206fc71b1199b834cc198df0bca15269d0c0a5fafd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721929571394940607,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721929571379692343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba
2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1721929559436837253,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172192955
5002599079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a61b54c0418380759094fff6b18b34e532b3761b6449b09813306455a54f8ec0,PodSandboxId:d403d51e1490c2320c93a00c562f26f6f4e3da0bd721d3c74a9c87fd1aa30535,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17219295378
34324539,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb0a62f4a501312f477c94c22d0cf69,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd,PodSandboxId:9e04e99a376a1e1e62532e5b1714466bfe9df09c9f4cd099bef96b800c2cdba4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721929534723301504,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721929534688682857,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721929534601299061,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526,PodSandboxId:209f2e15348a23b82a2ac856c72c9fd980284e4c17607853220bab880677bcba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721929534554019533,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7421e924-802c-4f78-9f0c-2bc9f9cf9bf3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2bbb36d42911b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   c949824afb5f4       busybox-fc5497c4f-2mwrb
	0110c72f3cc1a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   9bb7062a78b83       coredns-7db6d8ff4d-flblg
	35b4910d2dffd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   95f27d4d38116       storage-provisioner
	7faf8fe41b978       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   77a88d259037c       coredns-7db6d8ff4d-vtr9p
	fe8ee70c5b693       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   08e5a1f0a23d2       kindnet-2c2n8
	3afce6c1101d6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   c399536e97e26       kube-proxy-s6jdn
	a61b54c041838       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   d403d51e1490c       kube-vip-ha-174036
	0c7004ab2454d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   9e04e99a376a1       kube-apiserver-ha-174036
	5de803e0d40d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   18925eee7f455       etcd-ha-174036
	fe2d3acd60c40       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   792a8f45313d0       kube-scheduler-ha-174036
	26c724f452769       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   209f2e15348a2       kube-controller-manager-ha-174036
	
	
	==> coredns [0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f] <==
	[INFO] 10.244.1.2:48378 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001981322s
	[INFO] 10.244.0.4:57743 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000237286s
	[INFO] 10.244.0.4:35821 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009454s
	[INFO] 10.244.0.4:56762 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015961s
	[INFO] 10.244.0.4:33710 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011041s
	[INFO] 10.244.0.4:39222 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091598s
	[INFO] 10.244.2.2:35849 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163406s
	[INFO] 10.244.2.2:58585 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001474924s
	[INFO] 10.244.2.2:43739 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099316s
	[INFO] 10.244.1.2:50301 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00197463s
	[INFO] 10.244.1.2:57934 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001587617s
	[INFO] 10.244.1.2:46902 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144867s
	[INFO] 10.244.1.2:45033 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00024148s
	[INFO] 10.244.0.4:39933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007593s
	[INFO] 10.244.0.4:56548 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135774s
	[INFO] 10.244.2.2:37400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145773s
	[INFO] 10.244.2.2:35387 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00008288s
	[INFO] 10.244.2.2:51951 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060263s
	[INFO] 10.244.0.4:35903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122708s
	[INFO] 10.244.0.4:47190 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168947s
	[INFO] 10.244.2.2:57705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000173851s
	[INFO] 10.244.1.2:46849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111229s
	[INFO] 10.244.1.2:45248 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080498s
	[INFO] 10.244.1.2:34246 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112642s
	[INFO] 10.244.1.2:60449 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082776s
	
	
	==> coredns [7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f] <==
	[INFO] 10.244.2.2:51239 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001453951s
	[INFO] 10.244.0.4:47955 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140363s
	[INFO] 10.244.0.4:47149 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003632272s
	[INFO] 10.244.0.4:50546 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003351953s
	[INFO] 10.244.2.2:39311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129834s
	[INFO] 10.244.2.2:46828 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001959216s
	[INFO] 10.244.2.2:50785 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205115s
	[INFO] 10.244.2.2:60376 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000134751s
	[INFO] 10.244.2.2:42181 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000185565s
	[INFO] 10.244.1.2:33441 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154369s
	[INFO] 10.244.1.2:48932 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095106s
	[INFO] 10.244.1.2:57921 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014197s
	[INFO] 10.244.1.2:36171 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087145s
	[INFO] 10.244.0.4:34307 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088823s
	[INFO] 10.244.0.4:57061 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114297s
	[INFO] 10.244.2.2:54914 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000215592s
	[INFO] 10.244.1.2:41895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148191s
	[INFO] 10.244.1.2:43543 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125877s
	[INFO] 10.244.1.2:60822 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099959s
	[INFO] 10.244.1.2:55371 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085133s
	[INFO] 10.244.0.4:60792 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135863s
	[INFO] 10.244.0.4:34176 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000198465s
	[INFO] 10.244.2.2:48933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196507s
	[INFO] 10.244.2.2:49323 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179955s
	[INFO] 10.244.2.2:55358 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098973s
	
	
	==> describe nodes <==
	Name:               ha-174036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T17_45_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:45:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:52:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:48:44 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:48:44 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:48:44 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:48:44 +0000   Thu, 25 Jul 2024 17:46:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    ha-174036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1be020ed9784dbcb9721764c32b616e
	  System UUID:                a1be020e-d978-4dbc-b972-1764c32b616e
	  Boot ID:                    96d25b24-9958-4e84-b55d-0be006e0dab8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2mwrb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-7db6d8ff4d-flblg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m59s
	  kube-system                 coredns-7db6d8ff4d-vtr9p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m59s
	  kube-system                 etcd-ha-174036                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m13s
	  kube-system                 kindnet-2c2n8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m59s
	  kube-system                 kube-apiserver-ha-174036             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-controller-manager-ha-174036    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-proxy-s6jdn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 kube-scheduler-ha-174036             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-vip-ha-174036                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m58s  kube-proxy       
	  Normal  Starting                 7m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m13s  kubelet          Node ha-174036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s  kubelet          Node ha-174036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s  kubelet          Node ha-174036 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m59s  node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal  NodeReady                6m43s  kubelet          Node ha-174036 status is now: NodeReady
	  Normal  RegisteredNode           5m56s  node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal  RegisteredNode           4m42s  node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	
	
	Name:               ha-174036-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T17_46_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:46:40 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:49:34 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 25 Jul 2024 17:48:42 +0000   Thu, 25 Jul 2024 17:50:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 25 Jul 2024 17:48:42 +0000   Thu, 25 Jul 2024 17:50:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 25 Jul 2024 17:48:42 +0000   Thu, 25 Jul 2024 17:50:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 25 Jul 2024 17:48:42 +0000   Thu, 25 Jul 2024 17:50:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.197
	  Hostname:    ha-174036-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8093ac6d205c434d94cbb70f3b2823ae
	  System UUID:                8093ac6d-205c-434d-94cb-b70f3b2823ae
	  Boot ID:                    2e13db07-8ea1-42a3-acad-03ad7606d62e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wtxzv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-174036-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m11s
	  kube-system                 kindnet-k4d8x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-174036-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-controller-manager-ha-174036-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-xwvdm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-174036-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-vip-ha-174036-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m13s (x8 over 6m13s)  kubelet          Node ha-174036-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s (x8 over 6m13s)  kubelet          Node ha-174036-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s (x7 over 6m13s)  kubelet          Node ha-174036-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m9s                   node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           5m56s                  node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  NodeNotReady             2m37s                  node-controller  Node ha-174036-m02 status is now: NodeNotReady
	
	
	Name:               ha-174036-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T17_47_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:47:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:52:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:48:55 +0000   Thu, 25 Jul 2024 17:47:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:48:55 +0000   Thu, 25 Jul 2024 17:47:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:48:55 +0000   Thu, 25 Jul 2024 17:47:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:48:55 +0000   Thu, 25 Jul 2024 17:48:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-174036-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 45503b4610a245398fdd1551d18f3934
	  System UUID:                45503b46-10a2-4539-8fdd-1551d18f3934
	  Boot ID:                    7ed5b409-9367-4265-9cd0-e00584c888dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qqdtg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-174036-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m59s
	  kube-system                 kindnet-fcznc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m59s
	  kube-system                 kube-apiserver-ha-174036-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-ha-174036-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-5klkv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-ha-174036-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-vip-ha-174036-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m59s (x2 over 4m59s)  kubelet          Node ha-174036-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x2 over 4m59s)  kubelet          Node ha-174036-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x2 over 4m59s)  kubelet          Node ha-174036-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	  Normal  NodeReady                4m38s                  kubelet          Node ha-174036-m03 status is now: NodeReady
	
	
	Name:               ha-174036-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T17_49_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:48:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:52:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:49:30 +0000   Thu, 25 Jul 2024 17:48:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:49:30 +0000   Thu, 25 Jul 2024 17:48:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:49:30 +0000   Thu, 25 Jul 2024 17:48:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:49:30 +0000   Thu, 25 Jul 2024 17:49:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-174036-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ccffe731755d4ecfa1441a8d697922a2
	  System UUID:                ccffe731-755d-4ecf-a144-1a8d697922a2
	  Boot ID:                    52b58166-644f-492f-aee4-24a775481797
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bvhcw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-proxy-cvcj9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x2 over 3m54s)  kubelet          Node ha-174036-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x2 over 3m54s)  kubelet          Node ha-174036-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x2 over 3m54s)  kubelet          Node ha-174036-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal  NodeReady                3m33s                  kubelet          Node ha-174036-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul25 17:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050092] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036800] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.666029] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.842811] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.842958] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.777476] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.055370] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056188] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.174852] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.114710] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.260280] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.890204] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.211746] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.064261] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251761] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.094069] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.327144] kauditd_printk_skb: 21 callbacks suppressed
	[Jul25 17:46] kauditd_printk_skb: 34 callbacks suppressed
	[ +46.764801] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9] <==
	{"level":"warn","ts":"2024-07-25T17:52:53.552932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.559889Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.56381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.576997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.58374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.590235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.594332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.597321Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.604954Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.608573Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.61128Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.612153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.619543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.622413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.62521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.628223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.634174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.644597Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.651987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.655095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.658675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.664505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.671892Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.680531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:52:53.727631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:52:53 up 7 min,  0 users,  load average: 0.22, 0.31, 0.18
	Linux ha-174036 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad] <==
	I0725 17:52:20.452128       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:52:30.460402       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:52:30.460454       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:52:30.460694       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:52:30.460723       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:52:30.460847       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:52:30.460866       1 main.go:299] handling current node
	I0725 17:52:30.460886       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:52:30.460892       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 17:52:40.454921       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:52:40.454970       1 main.go:299] handling current node
	I0725 17:52:40.454984       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:52:40.454996       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 17:52:40.455211       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:52:40.455224       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:52:40.455291       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:52:40.455317       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:52:50.460178       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:52:50.460325       1 main.go:299] handling current node
	I0725 17:52:50.460362       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:52:50.460380       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 17:52:50.460581       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:52:50.460626       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:52:50.460726       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:52:50.460747       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd] <==
	I0725 17:45:40.873250       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0725 17:45:40.884191       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0725 17:45:54.365532       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0725 17:45:54.366410       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0725 17:47:55.506264       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0725 17:47:55.506349       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0725 17:47:55.506391       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 9.079µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0725 17:47:55.507645       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0725 17:47:55.507913       1 timeout.go:142] post-timeout activity - time-elapsed: 1.867844ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0725 17:48:27.240625       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44802: use of closed network connection
	E0725 17:48:27.430926       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44818: use of closed network connection
	E0725 17:48:27.618221       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44840: use of closed network connection
	E0725 17:48:27.812599       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35156: use of closed network connection
	E0725 17:48:27.997664       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35176: use of closed network connection
	E0725 17:48:28.186932       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35184: use of closed network connection
	E0725 17:48:28.365554       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35206: use of closed network connection
	E0725 17:48:28.541904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35228: use of closed network connection
	E0725 17:48:28.727648       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35240: use of closed network connection
	E0725 17:48:29.022530       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35264: use of closed network connection
	E0725 17:48:29.210094       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35286: use of closed network connection
	E0725 17:48:29.384014       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35304: use of closed network connection
	E0725 17:48:29.573048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35324: use of closed network connection
	E0725 17:48:29.749375       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35338: use of closed network connection
	E0725 17:48:29.927249       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35366: use of closed network connection
	W0725 17:49:59.414080       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.165 192.168.39.253]
	
	
	==> kube-controller-manager [26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526] <==
	I0725 17:48:22.726654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.949µs"
	I0725 17:48:22.738601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="179.57µs"
	I0725 17:48:22.747376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.439µs"
	I0725 17:48:22.846392       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.452072ms"
	I0725 17:48:23.045407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="198.789744ms"
	E0725 17:48:23.045453       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:48:23.045563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.763µs"
	I0725 17:48:23.054219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="209.155µs"
	I0725 17:48:24.566087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.296µs"
	I0725 17:48:26.074925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.573644ms"
	I0725 17:48:26.075089       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.274µs"
	I0725 17:48:26.500352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.838482ms"
	I0725 17:48:26.500534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.371µs"
	I0725 17:48:26.781513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.454496ms"
	I0725 17:48:26.781832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.791µs"
	E0725 17:48:59.136629       1 certificate_controller.go:146] Sync csr-t6bk2 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-t6bk2": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:48:59.424658       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-174036-m04\" does not exist"
	I0725 17:48:59.449172       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-174036-m04" podCIDRs=["10.244.3.0/24"]
	E0725 17:48:59.623621       1 daemon_controller.go:324] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"43516f2f-60db-4965-95c3-016e6e19e643", ResourceVersion:"914", Generation:1, CreationTimestamp:time.Date(2024, time.July, 25, 17, 45, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\
":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240719-e7903573\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostP
ath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001fd2260), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", Vo
lumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00257eae0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1
.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00257eaf8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), Down
wardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00257eb10), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.IS
CSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Containe
r{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240719-e7903573", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001fd2280)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001fd22c0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:res
ource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(
*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc002904660), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002841ff8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001e9b200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, H
ostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00284e7b0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00287c040)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on
daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:49:04.284449       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-174036-m04"
	I0725 17:49:20.223473       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174036-m04"
	I0725 17:50:16.736252       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174036-m04"
	I0725 17:50:16.781874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.394571ms"
	I0725 17:50:16.781990       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.729µs"
	
	
	==> kube-proxy [3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136] <==
	I0725 17:45:55.265656       1 server_linux.go:69] "Using iptables proxy"
	I0725 17:45:55.282546       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	I0725 17:45:55.319637       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:45:55.319680       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:45:55.319698       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:45:55.322168       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:45:55.322638       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:45:55.322679       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:45:55.324284       1 config.go:192] "Starting service config controller"
	I0725 17:45:55.324455       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:45:55.324496       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:45:55.324512       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:45:55.325124       1 config.go:319] "Starting node config controller"
	I0725 17:45:55.325176       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:45:55.425218       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 17:45:55.425237       1 shared_informer.go:320] Caches are synced for node config
	I0725 17:45:55.425360       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002] <==
	W0725 17:45:38.783465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 17:45:38.783504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 17:45:38.788579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 17:45:38.788624       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 17:45:38.881489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 17:45:38.881533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 17:45:38.900961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 17:45:38.901040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 17:45:38.929232       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 17:45:38.929278       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 17:45:38.941407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 17:45:38.941461       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0725 17:45:40.791158       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0725 17:47:54.826283       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5klkv\": pod kube-proxy-5klkv is already assigned to node \"ha-174036-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5klkv" node="ha-174036-m03"
	E0725 17:47:54.828928       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cc83bed2-4af8-4de2-ac28-f9b62e75297b(kube-system/kube-proxy-5klkv) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5klkv"
	E0725 17:47:54.829145       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5klkv\": pod kube-proxy-5klkv is already assigned to node \"ha-174036-m03\"" pod="kube-system/kube-proxy-5klkv"
	I0725 17:47:54.829246       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5klkv" node="ha-174036-m03"
	E0725 17:48:22.692298       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wtxzv\": pod busybox-fc5497c4f-wtxzv is already assigned to node \"ha-174036-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wtxzv" node="ha-174036-m02"
	E0725 17:48:22.692366       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 93b566c1-d54b-4740-a5ce-777a73656d9a(default/busybox-fc5497c4f-wtxzv) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wtxzv"
	E0725 17:48:22.692380       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wtxzv\": pod busybox-fc5497c4f-wtxzv is already assigned to node \"ha-174036-m02\"" pod="default/busybox-fc5497c4f-wtxzv"
	I0725 17:48:22.692408       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wtxzv" node="ha-174036-m02"
	E0725 17:48:59.526924       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bvhcw\": pod kindnet-bvhcw is already assigned to node \"ha-174036-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bvhcw" node="ha-174036-m04"
	E0725 17:48:59.527112       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3353f0f7-eee0-42c7-aaef-d495f721b520(kube-system/kindnet-bvhcw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bvhcw"
	E0725 17:48:59.527151       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bvhcw\": pod kindnet-bvhcw is already assigned to node \"ha-174036-m04\"" pod="kube-system/kindnet-bvhcw"
	I0725 17:48:59.527190       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bvhcw" node="ha-174036-m04"
	
	
	==> kubelet <==
	Jul 25 17:48:40 ha-174036 kubelet[1362]: E0725 17:48:40.851156    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:48:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:48:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:48:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:48:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:49:40 ha-174036 kubelet[1362]: E0725 17:49:40.850905    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:49:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:49:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:49:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:49:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:50:40 ha-174036 kubelet[1362]: E0725 17:50:40.851812    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:50:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:50:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:50:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:50:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:51:40 ha-174036 kubelet[1362]: E0725 17:51:40.850553    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:51:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:51:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:51:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:51:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:52:40 ha-174036 kubelet[1362]: E0725 17:52:40.852875    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:52:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:52:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:52:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:52:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174036 -n ha-174036
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (49.47s)
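Two patterns recur in the post-mortem logs above and appear separate from the actual failure: the kube-scheduler's "Operation cannot be fulfilled on pods/binding ... already assigned to node" messages are binding races where a second scheduling attempt finds the pod already placed (kube-proxy-5klkv, busybox-fc5497c4f-wtxzv, kindnet-bvhcw) and the scheduler immediately aborts re-queueing them, while the kubelet's "Could not set up iptables canary" error, logged once a minute, only reports that the ip6tables nat table is missing in the guest kernel. A minimal way to confirm the latter by hand, assuming the ha-174036 profile is still running (the commands below are illustrative and not part of the test):

  out/minikube-linux-amd64 ssh -p ha-174036
  # inside the VM: listing the nat table should reproduce the canary error verbatim
  sudo ip6tables -t nat -L
  # ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
  # loading the module, if it is built for this kernel, would let the canary chain be created
  sudo modprobe ip6table_nat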

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (421.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-174036 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-174036 -v=7 --alsologtostderr
E0725 17:54:12.056626   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:54:39.739508   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-174036 -v=7 --alsologtostderr: exit status 82 (2m1.775653565s)

                                                
                                                
-- stdout --
	* Stopping node "ha-174036-m04"  ...
	* Stopping node "ha-174036-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:52:55.113698   29503 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:52:55.113797   29503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:55.113805   29503 out.go:304] Setting ErrFile to fd 2...
	I0725 17:52:55.113809   29503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:52:55.113992   29503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:52:55.114197   29503 out.go:298] Setting JSON to false
	I0725 17:52:55.114273   29503 mustload.go:65] Loading cluster: ha-174036
	I0725 17:52:55.114626   29503 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:52:55.114707   29503 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:52:55.114868   29503 mustload.go:65] Loading cluster: ha-174036
	I0725 17:52:55.114996   29503 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:52:55.115023   29503 stop.go:39] StopHost: ha-174036-m04
	I0725 17:52:55.115386   29503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:55.115432   29503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:55.131993   29503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42657
	I0725 17:52:55.132532   29503 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:55.133102   29503 main.go:141] libmachine: Using API Version  1
	I0725 17:52:55.133124   29503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:55.133456   29503 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:55.135919   29503 out.go:177] * Stopping node "ha-174036-m04"  ...
	I0725 17:52:55.137651   29503 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0725 17:52:55.137701   29503 main.go:141] libmachine: (ha-174036-m04) Calling .DriverName
	I0725 17:52:55.137947   29503 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0725 17:52:55.137976   29503 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHHostname
	I0725 17:52:55.141216   29503 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:55.141751   29503 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:48:44 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 17:52:55.141788   29503 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 17:52:55.142006   29503 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHPort
	I0725 17:52:55.142208   29503 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHKeyPath
	I0725 17:52:55.142380   29503 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHUsername
	I0725 17:52:55.142634   29503 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m04/id_rsa Username:docker}
	I0725 17:52:55.227985   29503 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0725 17:52:55.282209   29503 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0725 17:52:55.334770   29503 main.go:141] libmachine: Stopping "ha-174036-m04"...
	I0725 17:52:55.334796   29503 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 17:52:55.336474   29503 main.go:141] libmachine: (ha-174036-m04) Calling .Stop
	I0725 17:52:55.340051   29503 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 0/120
	I0725 17:52:56.421521   29503 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 17:52:56.422826   29503 main.go:141] libmachine: Machine "ha-174036-m04" was stopped.
	I0725 17:52:56.422843   29503 stop.go:75] duration metric: took 1.285196918s to stop
	I0725 17:52:56.422874   29503 stop.go:39] StopHost: ha-174036-m03
	I0725 17:52:56.423155   29503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:52:56.423193   29503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:52:56.438592   29503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0725 17:52:56.438938   29503 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:52:56.439444   29503 main.go:141] libmachine: Using API Version  1
	I0725 17:52:56.439471   29503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:52:56.439768   29503 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:52:56.441698   29503 out.go:177] * Stopping node "ha-174036-m03"  ...
	I0725 17:52:56.442778   29503 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0725 17:52:56.442812   29503 main.go:141] libmachine: (ha-174036-m03) Calling .DriverName
	I0725 17:52:56.443039   29503 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0725 17:52:56.443061   29503 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHHostname
	I0725 17:52:56.446186   29503 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:56.446611   29503 main.go:141] libmachine: (ha-174036-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:8c:91", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:47:20 +0000 UTC Type:0 Mac:52:54:00:44:8c:91 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-174036-m03 Clientid:01:52:54:00:44:8c:91}
	I0725 17:52:56.446645   29503 main.go:141] libmachine: (ha-174036-m03) DBG | domain ha-174036-m03 has defined IP address 192.168.39.253 and MAC address 52:54:00:44:8c:91 in network mk-ha-174036
	I0725 17:52:56.446786   29503 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHPort
	I0725 17:52:56.446947   29503 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHKeyPath
	I0725 17:52:56.447092   29503 main.go:141] libmachine: (ha-174036-m03) Calling .GetSSHUsername
	I0725 17:52:56.447219   29503 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m03/id_rsa Username:docker}
	I0725 17:52:56.526359   29503 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0725 17:52:56.578862   29503 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0725 17:52:56.632656   29503 main.go:141] libmachine: Stopping "ha-174036-m03"...
	I0725 17:52:56.632688   29503 main.go:141] libmachine: (ha-174036-m03) Calling .GetState
	I0725 17:52:56.634296   29503 main.go:141] libmachine: (ha-174036-m03) Calling .Stop
	I0725 17:52:56.638210   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 0/120
	I0725 17:52:57.639633   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 1/120
	I0725 17:52:58.641339   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 2/120
	I0725 17:52:59.642716   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 3/120
	I0725 17:53:00.644358   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 4/120
	I0725 17:53:01.646755   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 5/120
	I0725 17:53:02.649186   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 6/120
	I0725 17:53:03.650978   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 7/120
	I0725 17:53:04.653064   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 8/120
	I0725 17:53:05.654350   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 9/120
	I0725 17:53:06.656678   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 10/120
	I0725 17:53:07.658068   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 11/120
	I0725 17:53:08.659967   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 12/120
	I0725 17:53:09.661728   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 13/120
	I0725 17:53:10.663200   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 14/120
	I0725 17:53:11.664662   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 15/120
	I0725 17:53:12.665969   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 16/120
	I0725 17:53:13.667497   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 17/120
	I0725 17:53:14.669077   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 18/120
	I0725 17:53:15.670715   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 19/120
	I0725 17:53:16.672598   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 20/120
	I0725 17:53:17.674168   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 21/120
	I0725 17:53:18.675327   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 22/120
	I0725 17:53:19.676964   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 23/120
	I0725 17:53:20.678964   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 24/120
	I0725 17:53:21.681050   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 25/120
	I0725 17:53:22.683212   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 26/120
	I0725 17:53:23.684890   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 27/120
	I0725 17:53:24.687257   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 28/120
	I0725 17:53:25.688571   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 29/120
	I0725 17:53:26.690230   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 30/120
	I0725 17:53:27.691967   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 31/120
	I0725 17:53:28.693527   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 32/120
	I0725 17:53:29.695085   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 33/120
	I0725 17:53:30.696772   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 34/120
	I0725 17:53:31.698632   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 35/120
	I0725 17:53:32.699955   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 36/120
	I0725 17:53:33.701264   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 37/120
	I0725 17:53:34.703422   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 38/120
	I0725 17:53:35.704830   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 39/120
	I0725 17:53:36.706619   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 40/120
	I0725 17:53:37.708012   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 41/120
	I0725 17:53:38.709392   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 42/120
	I0725 17:53:39.711134   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 43/120
	I0725 17:53:40.712368   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 44/120
	I0725 17:53:41.714165   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 45/120
	I0725 17:53:42.715831   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 46/120
	I0725 17:53:43.717281   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 47/120
	I0725 17:53:44.719234   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 48/120
	I0725 17:53:45.720684   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 49/120
	I0725 17:53:46.722445   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 50/120
	I0725 17:53:47.723984   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 51/120
	I0725 17:53:48.725611   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 52/120
	I0725 17:53:49.727247   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 53/120
	I0725 17:53:50.728632   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 54/120
	I0725 17:53:51.730354   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 55/120
	I0725 17:53:52.731750   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 56/120
	I0725 17:53:53.733259   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 57/120
	I0725 17:53:54.734881   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 58/120
	I0725 17:53:55.736247   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 59/120
	I0725 17:53:56.738090   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 60/120
	I0725 17:53:57.739516   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 61/120
	I0725 17:53:58.741731   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 62/120
	I0725 17:53:59.742960   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 63/120
	I0725 17:54:00.744094   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 64/120
	I0725 17:54:01.745969   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 65/120
	I0725 17:54:02.747180   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 66/120
	I0725 17:54:03.748752   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 67/120
	I0725 17:54:04.750210   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 68/120
	I0725 17:54:05.751536   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 69/120
	I0725 17:54:06.753651   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 70/120
	I0725 17:54:07.755072   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 71/120
	I0725 17:54:08.756587   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 72/120
	I0725 17:54:09.758854   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 73/120
	I0725 17:54:10.760145   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 74/120
	I0725 17:54:11.761984   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 75/120
	I0725 17:54:12.763504   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 76/120
	I0725 17:54:13.764877   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 77/120
	I0725 17:54:14.766298   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 78/120
	I0725 17:54:15.767473   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 79/120
	I0725 17:54:16.769331   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 80/120
	I0725 17:54:17.770795   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 81/120
	I0725 17:54:18.772342   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 82/120
	I0725 17:54:19.774419   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 83/120
	I0725 17:54:20.776994   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 84/120
	I0725 17:54:21.779115   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 85/120
	I0725 17:54:22.780988   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 86/120
	I0725 17:54:23.783272   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 87/120
	I0725 17:54:24.784758   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 88/120
	I0725 17:54:25.786954   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 89/120
	I0725 17:54:26.788688   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 90/120
	I0725 17:54:27.790965   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 91/120
	I0725 17:54:28.792540   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 92/120
	I0725 17:54:29.794280   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 93/120
	I0725 17:54:30.796006   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 94/120
	I0725 17:54:31.798127   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 95/120
	I0725 17:54:32.799711   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 96/120
	I0725 17:54:33.801166   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 97/120
	I0725 17:54:34.803515   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 98/120
	I0725 17:54:35.804868   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 99/120
	I0725 17:54:36.806553   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 100/120
	I0725 17:54:37.808100   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 101/120
	I0725 17:54:38.809653   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 102/120
	I0725 17:54:39.810931   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 103/120
	I0725 17:54:40.812285   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 104/120
	I0725 17:54:41.813988   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 105/120
	I0725 17:54:42.815576   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 106/120
	I0725 17:54:43.816984   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 107/120
	I0725 17:54:44.818999   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 108/120
	I0725 17:54:45.821073   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 109/120
	I0725 17:54:46.822932   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 110/120
	I0725 17:54:47.824172   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 111/120
	I0725 17:54:48.825811   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 112/120
	I0725 17:54:49.827127   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 113/120
	I0725 17:54:50.828536   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 114/120
	I0725 17:54:51.830506   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 115/120
	I0725 17:54:52.831970   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 116/120
	I0725 17:54:53.833499   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 117/120
	I0725 17:54:54.834967   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 118/120
	I0725 17:54:55.836277   29503 main.go:141] libmachine: (ha-174036-m03) Waiting for machine to stop 119/120
	I0725 17:54:56.837219   29503 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0725 17:54:56.837274   29503 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0725 17:54:56.838992   29503 out.go:177] 
	W0725 17:54:56.840355   29503 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0725 17:54:56.840379   29503 out.go:239] * 
	* 
	W0725 17:54:56.842584   29503 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 17:54:56.844960   29503 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-174036 -v=7 --alsologtostderr" : exit status 82
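Exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in the stderr block above: ha-174036-m04 stopped after a single poll (~1.3s), but ha-174036-m03 was still "Running" after all 120 one-second polls, so the stop command gave up after roughly two minutes. (The args string quoted in this assertion appears to come from the earlier `node list` invocation at ha_test.go:456; the command that actually failed is the `stop` at ha_test.go:462.) When reproducing this locally on the KVM driver, the stuck domain can be inspected through libvirt directly; the domain name below is taken from this run and the commands are illustrative only, not something the test performs:

  virsh list --all
  virsh dominfo ha-174036-m03
  # hard power-off for local debugging only
  virsh destroy ha-174036-m03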
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174036 --wait=true -v=7 --alsologtostderr
E0725 17:56:58.590390   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:58:21.637544   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:59:12.056146   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-174036 --wait=true -v=7 --alsologtostderr: (4m57.615783274s)
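The timing accounts for most of this test's 421.9s: the failed stop consumed 2m1.78s (~122s) and the full restart another 4m57.6s (~298s), together roughly 420s, with the remaining couple of seconds roughly matching the node list calls and the post-mortem log collection (1.78s) shown below.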
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-174036
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174036 -n ha-174036
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174036 logs -n 25: (1.777798531s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m02:/home/docker/cp-test_ha-174036-m03_ha-174036-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m02 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m03_ha-174036-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04:/home/docker/cp-test_ha-174036-m03_ha-174036-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m04 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m03_ha-174036-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp testdata/cp-test.txt                                               | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile106261026/001/cp-test_ha-174036-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036:/home/docker/cp-test_ha-174036-m04_ha-174036.txt                      |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036 sudo cat                                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036.txt                                |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m02:/home/docker/cp-test_ha-174036-m04_ha-174036-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m02 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03:/home/docker/cp-test_ha-174036-m04_ha-174036-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m03 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-174036 node stop m02 -v=7                                                    | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-174036 node start m02 -v=7                                                   | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-174036 -v=7                                                          | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-174036 -v=7                                                               | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-174036 --wait=true -v=7                                                   | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:54 UTC | 25 Jul 24 17:59 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-174036                                                               | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:59 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 17:54:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:54:56.890131   29986 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:54:56.890498   29986 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:54:56.890534   29986 out.go:304] Setting ErrFile to fd 2...
	I0725 17:54:56.890542   29986 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:54:56.890982   29986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:54:56.891757   29986 out.go:298] Setting JSON to false
	I0725 17:54:56.892759   29986 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2241,"bootTime":1721927856,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 17:54:56.892818   29986 start.go:139] virtualization: kvm guest
	I0725 17:54:56.894739   29986 out.go:177] * [ha-174036] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 17:54:56.896670   29986 notify.go:220] Checking for updates...
	I0725 17:54:56.896755   29986 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 17:54:56.898451   29986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:54:56.899800   29986 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:54:56.901034   29986 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:54:56.902363   29986 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 17:54:56.903836   29986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 17:54:56.905583   29986 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:54:56.905701   29986 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 17:54:56.906142   29986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:54:56.906206   29986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:54:56.920947   29986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0725 17:54:56.921435   29986 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:54:56.922104   29986 main.go:141] libmachine: Using API Version  1
	I0725 17:54:56.922147   29986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:54:56.922436   29986 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:54:56.922615   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:54:56.957254   29986 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 17:54:56.958789   29986 start.go:297] selected driver: kvm2
	I0725 17:54:56.958805   29986 start.go:901] validating driver "kvm2" against &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:54:56.958939   29986 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:54:56.959269   29986 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:54:56.959356   29986 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 17:54:56.973916   29986 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 17:54:56.974535   29986 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:54:56.974567   29986 cni.go:84] Creating CNI manager for ""
	I0725 17:54:56.974573   29986 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 17:54:56.974636   29986 start.go:340] cluster config:
	{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:54:56.974759   29986 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:54:56.976638   29986 out.go:177] * Starting "ha-174036" primary control-plane node in "ha-174036" cluster
	I0725 17:54:56.977873   29986 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:54:56.977910   29986 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 17:54:56.977917   29986 cache.go:56] Caching tarball of preloaded images
	I0725 17:54:56.978004   29986 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 17:54:56.978014   29986 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 17:54:56.978120   29986 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:54:56.978307   29986 start.go:360] acquireMachinesLock for ha-174036: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 17:54:56.978351   29986 start.go:364] duration metric: took 22.555µs to acquireMachinesLock for "ha-174036"
	I0725 17:54:56.978362   29986 start.go:96] Skipping create...Using existing machine configuration
	I0725 17:54:56.978369   29986 fix.go:54] fixHost starting: 
	I0725 17:54:56.978617   29986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:54:56.978644   29986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:54:56.992629   29986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0725 17:54:56.993031   29986 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:54:56.993426   29986 main.go:141] libmachine: Using API Version  1
	I0725 17:54:56.993444   29986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:54:56.993732   29986 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:54:56.993903   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:54:56.994034   29986 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:54:56.995687   29986 fix.go:112] recreateIfNeeded on ha-174036: state=Running err=<nil>
	W0725 17:54:56.995723   29986 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 17:54:56.997586   29986 out.go:177] * Updating the running kvm2 "ha-174036" VM ...
	I0725 17:54:56.998956   29986 machine.go:94] provisionDockerMachine start ...
	I0725 17:54:56.998977   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:54:56.999186   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.001597   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.001989   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.002014   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.002153   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:54:57.002330   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.002479   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.002596   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:54:57.002710   29986 main.go:141] libmachine: Using SSH client type: native
	I0725 17:54:57.002882   29986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:54:57.002895   29986 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 17:54:57.113130   29986 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174036
	
	I0725 17:54:57.113158   29986 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:54:57.113413   29986 buildroot.go:166] provisioning hostname "ha-174036"
	I0725 17:54:57.113447   29986 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:54:57.113669   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.116172   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.116589   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.116618   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.116753   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:54:57.116913   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.117082   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.117195   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:54:57.117325   29986 main.go:141] libmachine: Using SSH client type: native
	I0725 17:54:57.117471   29986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:54:57.117481   29986 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174036 && echo "ha-174036" | sudo tee /etc/hostname
	I0725 17:54:57.241654   29986 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174036
	
	I0725 17:54:57.241682   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.244479   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.244878   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.244921   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.245050   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:54:57.245234   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.245409   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.245664   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:54:57.245879   29986 main.go:141] libmachine: Using SSH client type: native
	I0725 17:54:57.246087   29986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:54:57.246110   29986 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174036/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:54:57.352726   29986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
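For reference, the hostname provisioning performed over SSH above amounts to setting both the transient and persistent hostname and making sure /etc/hosts resolves it. A consolidated sketch of the commands shown in the log (the hostname ha-174036 comes from the profile):

	# Set the transient and persistent hostname on the guest
	sudo hostname ha-174036 && echo "ha-174036" | sudo tee /etc/hostname
	# Make /etc/hosts resolve the hostname, reusing the 127.0.1.1 entry if one exists
	if ! grep -xq '.*\sha-174036' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174036/g' /etc/hosts
	  else
	    echo '127.0.1.1 ha-174036' | sudo tee -a /etc/hosts
	  fi
	fi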
	I0725 17:54:57.352760   29986 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 17:54:57.352817   29986 buildroot.go:174] setting up certificates
	I0725 17:54:57.352831   29986 provision.go:84] configureAuth start
	I0725 17:54:57.352849   29986 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:54:57.353105   29986 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:54:57.355599   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.356035   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.356071   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.356189   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.358430   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.358756   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.358782   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.358921   29986 provision.go:143] copyHostCerts
	I0725 17:54:57.358950   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:54:57.358991   29986 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 17:54:57.359004   29986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:54:57.359084   29986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 17:54:57.359241   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:54:57.359269   29986 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 17:54:57.359278   29986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:54:57.359328   29986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 17:54:57.359405   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:54:57.359428   29986 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 17:54:57.359438   29986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:54:57.359471   29986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 17:54:57.359547   29986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.ha-174036 san=[127.0.0.1 192.168.39.165 ha-174036 localhost minikube]
	I0725 17:54:57.760045   29986 provision.go:177] copyRemoteCerts
	I0725 17:54:57.760105   29986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:54:57.760126   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.762693   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.763208   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.763237   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.763440   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:54:57.763671   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.763837   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:54:57.763994   29986 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:54:57.846268   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 17:54:57.846335   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 17:54:57.870254   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 17:54:57.870338   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0725 17:54:57.893831   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 17:54:57.893894   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 17:54:57.916731   29986 provision.go:87] duration metric: took 563.886786ms to configureAuth
	I0725 17:54:57.916754   29986 buildroot.go:189] setting minikube options for container-runtime
	I0725 17:54:57.916980   29986 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:54:57.917044   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.919914   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.920281   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.920430   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.920429   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:54:57.920626   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.920800   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.920920   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:54:57.921067   29986 main.go:141] libmachine: Using SSH client type: native
	I0725 17:54:57.921259   29986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:54:57.921275   29986 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 17:56:28.806229   29986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 17:56:28.806256   29986 machine.go:97] duration metric: took 1m31.807285741s to provisionDockerMachine
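The container-runtime option written during provisioning is a single drop-in file; a minimal sketch of the equivalent manual step, with the values taken from the log:

	# Write the minikube CRI-O options drop-in and restart the runtime
	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio

Note that the systemctl restart crio issued at 17:54:57 did not return until 17:56:28, which accounts for almost all of the 1m31.8s provisionDockerMachine duration reported above.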
	I0725 17:56:28.806270   29986 start.go:293] postStartSetup for "ha-174036" (driver="kvm2")
	I0725 17:56:28.806281   29986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:56:28.806302   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:28.806674   29986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:56:28.806705   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:56:28.809878   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:28.810335   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:28.810356   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:28.810562   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:56:28.810793   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:28.810967   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:56:28.811151   29986 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:56:28.895793   29986 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:56:28.900204   29986 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 17:56:28.900235   29986 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 17:56:28.900299   29986 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 17:56:28.900448   29986 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 17:56:28.900461   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 17:56:28.900549   29986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:56:28.909785   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:56:28.932696   29986 start.go:296] duration metric: took 126.413581ms for postStartSetup
	I0725 17:56:28.932737   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:28.933085   29986 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 17:56:28.933112   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:56:28.935586   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:28.935936   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:28.935960   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:28.936084   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:56:28.936314   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:28.936484   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:56:28.936646   29986 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	W0725 17:56:29.018400   29986 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0725 17:56:29.018435   29986 fix.go:56] duration metric: took 1m32.040061634s for fixHost
	I0725 17:56:29.018460   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:56:29.021226   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.021641   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:29.021677   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.021848   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:56:29.022039   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:29.022183   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:29.022326   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:56:29.022518   29986 main.go:141] libmachine: Using SSH client type: native
	I0725 17:56:29.022710   29986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:56:29.022728   29986 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 17:56:29.128817   29986 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721930189.086470918
	
	I0725 17:56:29.128840   29986 fix.go:216] guest clock: 1721930189.086470918
	I0725 17:56:29.128850   29986 fix.go:229] Guest: 2024-07-25 17:56:29.086470918 +0000 UTC Remote: 2024-07-25 17:56:29.018444543 +0000 UTC m=+92.163296824 (delta=68.026375ms)
	I0725 17:56:29.128885   29986 fix.go:200] guest clock delta is within tolerance: 68.026375ms
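The guest clock check above simply reads a sub-second timestamp on the VM and diffs it against the local clock. A minimal sketch, assuming GNU date and bc are available and using the SSH identity shown in the log:

	# Compare the guest clock against the host clock (IP, user and key taken from the log)
	guest=$(ssh -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa docker@192.168.39.165 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest/host clock delta: $(echo "$host - $guest" | bc) s"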
	I0725 17:56:29.128891   29986 start.go:83] releasing machines lock for "ha-174036", held for 1m32.150534157s
	I0725 17:56:29.128914   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:29.129180   29986 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:56:29.131987   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.132418   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:29.132444   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.132598   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:29.133183   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:29.133363   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:29.133445   29986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 17:56:29.133494   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:56:29.133602   29986 ssh_runner.go:195] Run: cat /version.json
	I0725 17:56:29.133626   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:56:29.136139   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.136206   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.136660   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:29.136683   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.136788   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:29.136809   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.136952   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:56:29.137043   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:56:29.137090   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:29.137205   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:29.137270   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:56:29.137372   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:56:29.137422   29986 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:56:29.137486   29986 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:56:29.247305   29986 ssh_runner.go:195] Run: systemctl --version
	I0725 17:56:29.253279   29986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 17:56:29.416869   29986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 17:56:29.423428   29986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 17:56:29.423498   29986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 17:56:29.432445   29986 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0725 17:56:29.432478   29986 start.go:495] detecting cgroup driver to use...
	I0725 17:56:29.432572   29986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 17:56:29.449047   29986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 17:56:29.463038   29986 docker.go:217] disabling cri-docker service (if available) ...
	I0725 17:56:29.463106   29986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 17:56:29.476136   29986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 17:56:29.490226   29986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 17:56:29.632955   29986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 17:56:29.777609   29986 docker.go:233] disabling docker service ...
	I0725 17:56:29.777673   29986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 17:56:29.793795   29986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 17:56:29.807156   29986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 17:56:29.952347   29986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 17:56:30.094138   29986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 17:56:30.107832   29986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:56:30.126830   29986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 17:56:30.126899   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.136794   29986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 17:56:30.136864   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.146631   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.156062   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.165586   29986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 17:56:30.175506   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.185304   29986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.195745   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.205402   29986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 17:56:30.214176   29986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 17:56:30.223439   29986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:56:30.365113   29986 ssh_runner.go:195] Run: sudo systemctl restart crio
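The cri-o configuration pass above is a series of in-place edits to /etc/crio/crio.conf.d/02-crio.conf plus a crictl endpoint file; a consolidated sketch of the edits shown in the log:

	# Point crictl at the cri-o socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pause image, cgroup driver and conmon cgroup
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Allow unprivileged low ports in pods and enable IPv4 forwarding
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	# Reload systemd units and restart the runtime
	sudo systemctl daemon-reload && sudo systemctl restart crio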
	I0725 17:56:30.632242   29986 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 17:56:30.632309   29986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 17:56:30.636926   29986 start.go:563] Will wait 60s for crictl version
	I0725 17:56:30.636987   29986 ssh_runner.go:195] Run: which crictl
	I0725 17:56:30.640448   29986 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 17:56:30.679023   29986 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 17:56:30.679104   29986 ssh_runner.go:195] Run: crio --version
	I0725 17:56:30.707687   29986 ssh_runner.go:195] Run: crio --version
	I0725 17:56:30.737034   29986 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 17:56:30.738601   29986 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:56:30.741500   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:30.741957   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:30.741981   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:30.742234   29986 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 17:56:30.746742   29986 kubeadm.go:883] updating cluster {Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 17:56:30.746864   29986 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:56:30.746916   29986 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:56:30.791141   29986 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 17:56:30.791165   29986 crio.go:433] Images already preloaded, skipping extraction
	I0725 17:56:30.791227   29986 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:56:30.824517   29986 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 17:56:30.824539   29986 cache_images.go:84] Images are preloaded, skipping loading
	I0725 17:56:30.824555   29986 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.30.3 crio true true} ...
	I0725 17:56:30.824663   29986 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 17:56:30.824728   29986 ssh_runner.go:195] Run: crio config
	I0725 17:56:30.870313   29986 cni.go:84] Creating CNI manager for ""
	I0725 17:56:30.870333   29986 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 17:56:30.870342   29986 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 17:56:30.870367   29986 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174036 NodeName:ha-174036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 17:56:30.870482   29986 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174036"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 17:56:30.870501   29986 kube-vip.go:115] generating kube-vip config ...
	I0725 17:56:30.870540   29986 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 17:56:30.881615   29986 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 17:56:30.881728   29986 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0725 17:56:30.881788   29986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 17:56:30.890853   29986 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 17:56:30.890909   29986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0725 17:56:30.899778   29986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0725 17:56:30.915238   29986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:56:30.930576   29986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0725 17:56:30.946094   29986 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0725 17:56:30.965461   29986 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0725 17:56:30.969283   29986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:56:31.115237   29986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:56:31.130076   29986 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036 for IP: 192.168.39.165
	I0725 17:56:31.130098   29986 certs.go:194] generating shared ca certs ...
	I0725 17:56:31.130116   29986 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:56:31.130398   29986 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 17:56:31.130511   29986 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 17:56:31.130541   29986 certs.go:256] generating profile certs ...
	I0725 17:56:31.130661   29986 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key
	I0725 17:56:31.130702   29986 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.1cdd1643
	I0725 17:56:31.130729   29986 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.1cdd1643 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.197 192.168.39.253 192.168.39.254]
	I0725 17:56:31.252990   29986 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.1cdd1643 ...
	I0725 17:56:31.253026   29986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.1cdd1643: {Name:mkad08bfe7915fa1b928db9aa69060350dde447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:56:31.253204   29986 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.1cdd1643 ...
	I0725 17:56:31.253217   29986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.1cdd1643: {Name:mkd35c3c45d71809ec73449c347505fd11f57b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:56:31.253297   29986 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.1cdd1643 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt
	I0725 17:56:31.253454   29986 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.1cdd1643 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key
	I0725 17:56:31.253592   29986 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key
	I0725 17:56:31.253607   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 17:56:31.253621   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 17:56:31.253636   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 17:56:31.253651   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 17:56:31.253667   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 17:56:31.253681   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 17:56:31.253695   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 17:56:31.253709   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 17:56:31.253765   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 17:56:31.253797   29986 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 17:56:31.253808   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:56:31.253837   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 17:56:31.253865   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:56:31.253889   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 17:56:31.253929   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:56:31.253963   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:56:31.253978   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 17:56:31.253997   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 17:56:31.254596   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:56:31.279525   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 17:56:31.302765   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:56:31.325924   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 17:56:31.351513   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 17:56:31.377394   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 17:56:31.402102   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:56:31.429122   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:56:31.455740   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:56:31.480119   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 17:56:31.505643   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 17:56:31.528647   29986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 17:56:31.545727   29986 ssh_runner.go:195] Run: openssl version
	I0725 17:56:31.551250   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:56:31.560995   29986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:56:31.565256   29986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:56:31.565310   29986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:56:31.572286   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:56:31.582523   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 17:56:31.592656   29986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 17:56:31.597140   29986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 17:56:31.597196   29986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 17:56:31.602702   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 17:56:31.611762   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 17:56:31.621989   29986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 17:56:31.626565   29986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 17:56:31.626615   29986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 17:56:31.632380   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
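The openssl x509 -hash calls above compute the subject-name hash under which OpenSSL expects to find a CA certificate in /etc/ssl/certs; the symlink names created here (b5213941.0, 51391683.0, 3ec20f2e.0) are exactly those hashes. A minimal sketch for one certificate:

	# Link an installed CA cert into /etc/ssl/certs under its OpenSSL subject hash
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${hash}.0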
	I0725 17:56:31.641783   29986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 17:56:31.646297   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 17:56:31.651920   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 17:56:31.657268   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 17:56:31.662599   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 17:56:31.668305   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 17:56:31.673635   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
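The -checkend 86400 probes above ask openssl whether each certificate will still be valid 24 hours from now; the command exits 0 if so and non-zero if the certificate has expired or will expire within that window. For example:

	# Exit 0 if the cert is still valid 24h from now, non-zero otherwise
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo 'certificate valid for at least 24h' || echo 'certificate expires within 24h'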
	I0725 17:56:31.678881   29986 kubeadm.go:392] StartCluster: {Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:56:31.679014   29986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 17:56:31.679055   29986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 17:56:31.718553   29986 cri.go:89] found id: "b662566d7a9cf8ff4572a82511d312ee23d4e19da560a641b4a76bdfed62491b"
	I0725 17:56:31.718580   29986 cri.go:89] found id: "13d64c5040c5e5628b082b1e7381a1e5ec5af82efc473cc83c58123d5bfa0e72"
	I0725 17:56:31.718583   29986 cri.go:89] found id: "5cee1a84014a844d9abe8f83d86ff058ddfd1511faf129e93242f7d3c17cc425"
	I0725 17:56:31.718587   29986 cri.go:89] found id: "0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f"
	I0725 17:56:31.718590   29986 cri.go:89] found id: "35b4910d2dffd4c50b6c81ede4e225d7473a98b0a9834b082d0bcdc49420a72e"
	I0725 17:56:31.718592   29986 cri.go:89] found id: "7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f"
	I0725 17:56:31.718595   29986 cri.go:89] found id: "fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad"
	I0725 17:56:31.718597   29986 cri.go:89] found id: "3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136"
	I0725 17:56:31.718599   29986 cri.go:89] found id: "a61b54c0418380759094fff6b18b34e532b3761b6449b09813306455a54f8ec0"
	I0725 17:56:31.718604   29986 cri.go:89] found id: "0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd"
	I0725 17:56:31.718606   29986 cri.go:89] found id: "5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9"
	I0725 17:56:31.718609   29986 cri.go:89] found id: "fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002"
	I0725 17:56:31.718612   29986 cri.go:89] found id: "26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526"
	I0725 17:56:31.718615   29986 cri.go:89] found id: ""
	I0725 17:56:31.718654   29986 ssh_runner.go:195] Run: sudo runc list -f json
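	[editor's note] The cri.go step above collects kube-system container IDs by running crictl with a label filter and treating each line of the --quiet output as one ID (the "found id:" entries). A small sketch of that pattern follows (assuming crictl is installed and configured against the CRI-O socket; not the actual minikube cri package).

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers runs the same command the log shows:
	//   crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	// and returns one container ID per output line.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}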
	
	
	==> CRI-O <==
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.138402807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721930395138382566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0487fe12-d786-4928-9345-ad1dedf62661 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.138973178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=932cd78a-aecd-422b-8fc9-aa624066afa9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.139042370Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=932cd78a-aecd-422b-8fc9-aa624066afa9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.139439063Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6a8e16cbbcba142f18289c307b962508fe611c078fb6e6d42238ac8f269ba9b,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721930278824104448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01492692f4a55c16ac602bfa2f3f14b9ac44b0e6f7e146a055b5098d353fc765,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721930242818747710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693ed1ff9eb4b980ff521ff1ea90e641998feda2f306358c9445e6f40a662013,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721930239833708143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073042a011f3b051a394218b4e6683492baaa860c841625944ba2726b693ce23,PodSandboxId:6cd3158807dfcef5d5adf5564be66e9a1fcdbd7ca53e8174031bd896f90e40d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721930232088088067,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721930229825397710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644aa0844307921b217853eba501a6fd3004f9ad9d9176422c20761cf63ae9d8,PodSandboxId:3eeb4cb8ff52c049b6a4b3f1361d0745bb774abbbd949524e367b9535755c8a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721930209457883027,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8da94472d9fcc0702357dfc4c274563,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7df72b32c957115d84bdfba24cadbbef9d124e077e0b3235b6292ac160093aa,PodSandboxId:444d87aef2ed9ad0ac0e680114fee2b3c66341af8a816078a39369f612b7acb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721930198712748967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a99835be7737720b17b3e6782c31ce85b5f6874ca897acfa6ea193b3ae2c944,PodSandboxId:b74c45a0fd7ebb772d9bb56f1443219cd1dcc2958c90a72571921a458be0162a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721930198611256721,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23512c0
87fe58c00af7b1190ac55bfb5a024fbe96400cccfe2fb3c3072b5ca4,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721930198608154492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3
14f987a5258293be50a13db0a871cf04b614ced50e042107a35bacb471c4b,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721930198508208081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bbd7762992c8f6c65c1db489
cdbc5e30de5e522cb55ef194c2957dc6c00506a,PodSandboxId:db06f08e58b8da89319c93c1d9e4221e022da68a74bc009d19bafba1d489ecb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721930198465124670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0824c06b1faa0d3a493f79faa86d05f7aa41c628058a
86915f6f4240edfadb,PodSandboxId:f73c39a23d513d08fa42d1047bcdb1a90dca87fef8e9715c41bfaf7ea8a5466f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721930198395950522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e26472c1f859c4d8b3453c7ea285bff46d404bbd55c288b432904fc54c79c7f4,PodSandboxId:89cb3bf14d473a17326
b231bf05dca2f9bd842d3eb7ef8573d84372bca22e292,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192294909886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13f2f605cb5c57b2abb7562ed4ce6d071d6756a44d0d330d1c27b6f8846928,PodSandboxId:e805190624769bd65b4d507db9cd4890147f025c40fb5ed9fa66e5138bd21449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192253870381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721929705860309177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annot
ations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571452674649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kube
rnetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571379738863,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721929559436879718,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721929555002607030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721929534688765221,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721929534601497054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=932cd78a-aecd-422b-8fc9-aa624066afa9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.193130637Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f806a3ef-2d92-48a8-9cf0-79c8c186ea60 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.193235834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f806a3ef-2d92-48a8-9cf0-79c8c186ea60 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.194862591Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=639f0864-44ef-4cf8-943f-4baa45c5e453 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.195762244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721930395195524663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=639f0864-44ef-4cf8-943f-4baa45c5e453 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.196596204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42d3df63-4bfb-4d9e-b28a-1264006e6e4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.196675104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42d3df63-4bfb-4d9e-b28a-1264006e6e4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.197132210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6a8e16cbbcba142f18289c307b962508fe611c078fb6e6d42238ac8f269ba9b,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721930278824104448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01492692f4a55c16ac602bfa2f3f14b9ac44b0e6f7e146a055b5098d353fc765,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721930242818747710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693ed1ff9eb4b980ff521ff1ea90e641998feda2f306358c9445e6f40a662013,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721930239833708143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073042a011f3b051a394218b4e6683492baaa860c841625944ba2726b693ce23,PodSandboxId:6cd3158807dfcef5d5adf5564be66e9a1fcdbd7ca53e8174031bd896f90e40d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721930232088088067,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721930229825397710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644aa0844307921b217853eba501a6fd3004f9ad9d9176422c20761cf63ae9d8,PodSandboxId:3eeb4cb8ff52c049b6a4b3f1361d0745bb774abbbd949524e367b9535755c8a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721930209457883027,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8da94472d9fcc0702357dfc4c274563,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7df72b32c957115d84bdfba24cadbbef9d124e077e0b3235b6292ac160093aa,PodSandboxId:444d87aef2ed9ad0ac0e680114fee2b3c66341af8a816078a39369f612b7acb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721930198712748967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a99835be7737720b17b3e6782c31ce85b5f6874ca897acfa6ea193b3ae2c944,PodSandboxId:b74c45a0fd7ebb772d9bb56f1443219cd1dcc2958c90a72571921a458be0162a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721930198611256721,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23512c0
87fe58c00af7b1190ac55bfb5a024fbe96400cccfe2fb3c3072b5ca4,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721930198608154492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3
14f987a5258293be50a13db0a871cf04b614ced50e042107a35bacb471c4b,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721930198508208081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bbd7762992c8f6c65c1db489
cdbc5e30de5e522cb55ef194c2957dc6c00506a,PodSandboxId:db06f08e58b8da89319c93c1d9e4221e022da68a74bc009d19bafba1d489ecb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721930198465124670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0824c06b1faa0d3a493f79faa86d05f7aa41c628058a
86915f6f4240edfadb,PodSandboxId:f73c39a23d513d08fa42d1047bcdb1a90dca87fef8e9715c41bfaf7ea8a5466f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721930198395950522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e26472c1f859c4d8b3453c7ea285bff46d404bbd55c288b432904fc54c79c7f4,PodSandboxId:89cb3bf14d473a17326
b231bf05dca2f9bd842d3eb7ef8573d84372bca22e292,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192294909886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13f2f605cb5c57b2abb7562ed4ce6d071d6756a44d0d330d1c27b6f8846928,PodSandboxId:e805190624769bd65b4d507db9cd4890147f025c40fb5ed9fa66e5138bd21449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192253870381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721929705860309177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annot
ations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571452674649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kube
rnetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571379738863,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721929559436879718,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721929555002607030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721929534688765221,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721929534601497054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42d3df63-4bfb-4d9e-b28a-1264006e6e4f name=/runtime.v1.RuntimeService/ListContainers
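	[editor's note] The Version, ImageFsInfo, and ListContainers request/response pairs in this CRI-O log are the standard CRI gRPC API being polled by the kubelet. As a rough sketch, a client could issue the same ListContainers call directly against the CRI-O socket (assumptions: the k8s.io/cri-api v1 bindings, google.golang.org/grpc, and a readable /var/run/crio/crio.sock; illustrative only, not how kubelet or minikube is wired).

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty filter corresponds to the "No filters were applied, returning
		// full container list" debug lines above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}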
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.248951151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a3b004e-acd1-47fa-ae3b-5f801f16b02e name=/runtime.v1.RuntimeService/Version
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.249038457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a3b004e-acd1-47fa-ae3b-5f801f16b02e name=/runtime.v1.RuntimeService/Version
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.250046820Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=def57963-4a06-407d-bfc0-56948a7e9016 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.250489603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721930395250464105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=def57963-4a06-407d-bfc0-56948a7e9016 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.251234320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d60bfdc-5b94-48f5-b437-871e3461a005 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.251303087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d60bfdc-5b94-48f5-b437-871e3461a005 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.251712270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6a8e16cbbcba142f18289c307b962508fe611c078fb6e6d42238ac8f269ba9b,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721930278824104448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01492692f4a55c16ac602bfa2f3f14b9ac44b0e6f7e146a055b5098d353fc765,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721930242818747710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693ed1ff9eb4b980ff521ff1ea90e641998feda2f306358c9445e6f40a662013,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721930239833708143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073042a011f3b051a394218b4e6683492baaa860c841625944ba2726b693ce23,PodSandboxId:6cd3158807dfcef5d5adf5564be66e9a1fcdbd7ca53e8174031bd896f90e40d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721930232088088067,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721930229825397710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644aa0844307921b217853eba501a6fd3004f9ad9d9176422c20761cf63ae9d8,PodSandboxId:3eeb4cb8ff52c049b6a4b3f1361d0745bb774abbbd949524e367b9535755c8a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721930209457883027,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8da94472d9fcc0702357dfc4c274563,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7df72b32c957115d84bdfba24cadbbef9d124e077e0b3235b6292ac160093aa,PodSandboxId:444d87aef2ed9ad0ac0e680114fee2b3c66341af8a816078a39369f612b7acb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721930198712748967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a99835be7737720b17b3e6782c31ce85b5f6874ca897acfa6ea193b3ae2c944,PodSandboxId:b74c45a0fd7ebb772d9bb56f1443219cd1dcc2958c90a72571921a458be0162a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721930198611256721,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23512c0
87fe58c00af7b1190ac55bfb5a024fbe96400cccfe2fb3c3072b5ca4,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721930198608154492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3
14f987a5258293be50a13db0a871cf04b614ced50e042107a35bacb471c4b,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721930198508208081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bbd7762992c8f6c65c1db489
cdbc5e30de5e522cb55ef194c2957dc6c00506a,PodSandboxId:db06f08e58b8da89319c93c1d9e4221e022da68a74bc009d19bafba1d489ecb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721930198465124670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0824c06b1faa0d3a493f79faa86d05f7aa41c628058a
86915f6f4240edfadb,PodSandboxId:f73c39a23d513d08fa42d1047bcdb1a90dca87fef8e9715c41bfaf7ea8a5466f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721930198395950522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e26472c1f859c4d8b3453c7ea285bff46d404bbd55c288b432904fc54c79c7f4,PodSandboxId:89cb3bf14d473a17326
b231bf05dca2f9bd842d3eb7ef8573d84372bca22e292,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192294909886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13f2f605cb5c57b2abb7562ed4ce6d071d6756a44d0d330d1c27b6f8846928,PodSandboxId:e805190624769bd65b4d507db9cd4890147f025c40fb5ed9fa66e5138bd21449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192253870381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721929705860309177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annot
ations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571452674649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kube
rnetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571379738863,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721929559436879718,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721929555002607030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721929534688765221,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721929534601497054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d60bfdc-5b94-48f5-b437-871e3461a005 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.291902421Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fa9ef5d-0592-4dcd-9830-d6fbb7c49770 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.292000965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fa9ef5d-0592-4dcd-9830-d6fbb7c49770 name=/runtime.v1.RuntimeService/Version
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.293287707Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0bdfe74-510c-487b-85d9-bb867afc67e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.294269979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721930395294238359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0bdfe74-510c-487b-85d9-bb867afc67e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.295026468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b8b37a6-a1d8-43b5-bb6d-e4458a3d2722 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.295085199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b8b37a6-a1d8-43b5-bb6d-e4458a3d2722 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 17:59:55 ha-174036 crio[3675]: time="2024-07-25 17:59:55.295498098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6a8e16cbbcba142f18289c307b962508fe611c078fb6e6d42238ac8f269ba9b,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721930278824104448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01492692f4a55c16ac602bfa2f3f14b9ac44b0e6f7e146a055b5098d353fc765,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721930242818747710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693ed1ff9eb4b980ff521ff1ea90e641998feda2f306358c9445e6f40a662013,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721930239833708143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073042a011f3b051a394218b4e6683492baaa860c841625944ba2726b693ce23,PodSandboxId:6cd3158807dfcef5d5adf5564be66e9a1fcdbd7ca53e8174031bd896f90e40d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721930232088088067,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721930229825397710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644aa0844307921b217853eba501a6fd3004f9ad9d9176422c20761cf63ae9d8,PodSandboxId:3eeb4cb8ff52c049b6a4b3f1361d0745bb774abbbd949524e367b9535755c8a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721930209457883027,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8da94472d9fcc0702357dfc4c274563,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7df72b32c957115d84bdfba24cadbbef9d124e077e0b3235b6292ac160093aa,PodSandboxId:444d87aef2ed9ad0ac0e680114fee2b3c66341af8a816078a39369f612b7acb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721930198712748967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a99835be7737720b17b3e6782c31ce85b5f6874ca897acfa6ea193b3ae2c944,PodSandboxId:b74c45a0fd7ebb772d9bb56f1443219cd1dcc2958c90a72571921a458be0162a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721930198611256721,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23512c0
87fe58c00af7b1190ac55bfb5a024fbe96400cccfe2fb3c3072b5ca4,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721930198608154492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3
14f987a5258293be50a13db0a871cf04b614ced50e042107a35bacb471c4b,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721930198508208081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bbd7762992c8f6c65c1db489
cdbc5e30de5e522cb55ef194c2957dc6c00506a,PodSandboxId:db06f08e58b8da89319c93c1d9e4221e022da68a74bc009d19bafba1d489ecb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721930198465124670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0824c06b1faa0d3a493f79faa86d05f7aa41c628058a
86915f6f4240edfadb,PodSandboxId:f73c39a23d513d08fa42d1047bcdb1a90dca87fef8e9715c41bfaf7ea8a5466f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721930198395950522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e26472c1f859c4d8b3453c7ea285bff46d404bbd55c288b432904fc54c79c7f4,PodSandboxId:89cb3bf14d473a17326
b231bf05dca2f9bd842d3eb7ef8573d84372bca22e292,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192294909886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13f2f605cb5c57b2abb7562ed4ce6d071d6756a44d0d330d1c27b6f8846928,PodSandboxId:e805190624769bd65b4d507db9cd4890147f025c40fb5ed9fa66e5138bd21449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192253870381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721929705860309177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annot
ations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571452674649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kube
rnetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571379738863,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721929559436879718,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721929555002607030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721929534688765221,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721929534601497054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b8b37a6-a1d8-43b5-bb6d-e4458a3d2722 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e6a8e16cbbcba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   964397aa7270b       storage-provisioner
	01492692f4a55       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Running             kube-apiserver            3                   a3172a2fbe2ec       kube-apiserver-ha-174036
	693ed1ff9eb4b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Running             kube-controller-manager   2                   8941cb3be23d3       kube-controller-manager-ha-174036
	073042a011f3b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   6cd3158807dfc       busybox-fc5497c4f-2mwrb
	274822bb9a65e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   964397aa7270b       storage-provisioner
	644aa08443079       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      3 minutes ago        Running             kube-vip                  0                   3eeb4cb8ff52c       kube-vip-ha-174036
	c7df72b32c957       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      3 minutes ago        Running             kube-proxy                1                   444d87aef2ed9       kube-proxy-s6jdn
	7a99835be7737       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      3 minutes ago        Running             kindnet-cni               1                   b74c45a0fd7eb       kindnet-2c2n8
	e23512c087fe5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      3 minutes ago        Exited              kube-controller-manager   1                   8941cb3be23d3       kube-controller-manager-ha-174036
	3a314f987a525       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      3 minutes ago        Exited              kube-apiserver            2                   a3172a2fbe2ec       kube-apiserver-ha-174036
	7bbd7762992c8       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      3 minutes ago        Running             kube-scheduler            1                   db06f08e58b8d       kube-scheduler-ha-174036
	0a0824c06b1fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago        Running             etcd                      1                   f73c39a23d513       etcd-ha-174036
	e26472c1f859c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   89cb3bf14d473       coredns-7db6d8ff4d-vtr9p
	3a13f2f605cb5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   e805190624769       coredns-7db6d8ff4d-flblg
	2bbb36d42911b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   c949824afb5f4       busybox-fc5497c4f-2mwrb
	0110c72f3cc1a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   9bb7062a78b83       coredns-7db6d8ff4d-flblg
	7faf8fe41b978       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   77a88d259037c       coredns-7db6d8ff4d-vtr9p
	fe8ee70c5b693       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   08e5a1f0a23d2       kindnet-2c2n8
	3afce6c1101d6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      14 minutes ago       Exited              kube-proxy                0                   c399536e97e26       kube-proxy-s6jdn
	5de803e0d40d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   18925eee7f455       etcd-ha-174036
	fe2d3acd60c40       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   792a8f45313d0       kube-scheduler-ha-174036
	
	
	==> coredns [0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f] <==
	[INFO] 10.244.2.2:51951 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060263s
	[INFO] 10.244.0.4:35903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122708s
	[INFO] 10.244.0.4:47190 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168947s
	[INFO] 10.244.2.2:57705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000173851s
	[INFO] 10.244.1.2:46849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111229s
	[INFO] 10.244.1.2:45248 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080498s
	[INFO] 10.244.1.2:34246 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112642s
	[INFO] 10.244.1.2:60449 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082776s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1854&timeout=9m38s&timeoutSeconds=578&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1839&timeout=9m50s&timeoutSeconds=590&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1834&timeout=8m58s&timeoutSeconds=538&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1834": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1834": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1839": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1839": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1854": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1854": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3a13f2f605cb5c57b2abb7562ed4ce6d071d6756a44d0d330d1c27b6f8846928] <==
	[INFO] plugin/kubernetes: Trace[802125844]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:56:41.530) (total time: 10001ms):
	Trace[802125844]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:56:51.531)
	Trace[802125844]: [10.001639272s] [10.001639272s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1181883564]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:56:46.430) (total time: 10000ms):
	Trace[1181883564]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (17:56:56.431)
	Trace[1181883564]: [10.000956393s] [10.000956393s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f] <==
	[INFO] 10.244.2.2:46828 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001959216s
	[INFO] 10.244.2.2:50785 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205115s
	[INFO] 10.244.2.2:60376 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000134751s
	[INFO] 10.244.2.2:42181 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000185565s
	[INFO] 10.244.1.2:33441 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154369s
	[INFO] 10.244.1.2:48932 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095106s
	[INFO] 10.244.1.2:57921 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014197s
	[INFO] 10.244.1.2:36171 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087145s
	[INFO] 10.244.0.4:34307 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088823s
	[INFO] 10.244.0.4:57061 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114297s
	[INFO] 10.244.2.2:54914 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000215592s
	[INFO] 10.244.1.2:41895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148191s
	[INFO] 10.244.1.2:43543 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125877s
	[INFO] 10.244.1.2:60822 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099959s
	[INFO] 10.244.1.2:55371 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085133s
	[INFO] 10.244.0.4:60792 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135863s
	[INFO] 10.244.0.4:34176 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000198465s
	[INFO] 10.244.2.2:48933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196507s
	[INFO] 10.244.2.2:49323 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179955s
	[INFO] 10.244.2.2:55358 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098973s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e26472c1f859c4d8b3453c7ea285bff46d404bbd55c288b432904fc54c79c7f4] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58666->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58666->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-174036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T17_45_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:45:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:59:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:57:19 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:57:19 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:57:19 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:57:19 +0000   Thu, 25 Jul 2024 17:46:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    ha-174036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1be020ed9784dbcb9721764c32b616e
	  System UUID:                a1be020e-d978-4dbc-b972-1764c32b616e
	  Boot ID:                    96d25b24-9958-4e84-b55d-0be006e0dab8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2mwrb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-flblg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-vtr9p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-174036                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-2c2n8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-174036             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-174036    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-s6jdn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-174036             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-174036                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 14m    kube-proxy       
	  Normal   Starting                 2m35s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m    kubelet          Node ha-174036 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m    kubelet          Node ha-174036 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m    kubelet          Node ha-174036 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m    node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-174036 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal   RegisteredNode           11m    node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Warning  ContainerGCFailed        4m15s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m29s  node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal   RegisteredNode           2m18s  node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal   RegisteredNode           31s    node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	
	
	Name:               ha-174036-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T17_46_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:46:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:59:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:58:00 +0000   Thu, 25 Jul 2024 17:57:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:58:00 +0000   Thu, 25 Jul 2024 17:57:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:58:00 +0000   Thu, 25 Jul 2024 17:57:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:58:00 +0000   Thu, 25 Jul 2024 17:57:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.197
	  Hostname:    ha-174036-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8093ac6d205c434d94cbb70f3b2823ae
	  System UUID:                8093ac6d-205c-434d-94cb-b70f3b2823ae
	  Boot ID:                    4309dfee-41d4-42dc-a2fc-9f32b8231986
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wtxzv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-174036-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-k4d8x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-174036-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-174036-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xwvdm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-174036-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-174036-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m3s                 kube-proxy       
	  Normal  Starting                 13m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)    kubelet          Node ha-174036-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)    kubelet          Node ha-174036-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)    kubelet          Node ha-174036-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                  node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  NodeNotReady             9m39s                node-controller  Node ha-174036-m02 status is now: NodeNotReady
	  Normal  Starting                 3m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m3s (x8 over 3m3s)  kubelet          Node ha-174036-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x8 over 3m3s)  kubelet          Node ha-174036-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x7 over 3m3s)  kubelet          Node ha-174036-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m29s                node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           2m18s                node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           31s                  node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	
	
	Name:               ha-174036-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T17_47_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:47:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:59:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:59:25 +0000   Thu, 25 Jul 2024 17:58:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:59:25 +0000   Thu, 25 Jul 2024 17:58:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:59:25 +0000   Thu, 25 Jul 2024 17:58:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:59:25 +0000   Thu, 25 Jul 2024 17:58:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-174036-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 45503b4610a245398fdd1551d18f3934
	  System UUID:                45503b46-10a2-4539-8fdd-1551d18f3934
	  Boot ID:                    77c12131-dcf2-42ee-a1bb-eb72e7e16375
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qqdtg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-174036-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-fcznc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-174036-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-174036-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5klkv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-174036-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-174036-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 43s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node ha-174036-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node ha-174036-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node ha-174036-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	  Normal   NodeReady                11m                kubelet          Node ha-174036-m03 status is now: NodeReady
	  Normal   RegisteredNode           2m29s              node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	  Normal   RegisteredNode           2m18s              node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	  Normal   NodeNotReady             109s               node-controller  Node ha-174036-m03 status is now: NodeNotReady
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s (x3 over 61s)  kubelet          Node ha-174036-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x3 over 61s)  kubelet          Node ha-174036-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x3 over 61s)  kubelet          Node ha-174036-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 61s (x2 over 61s)  kubelet          Node ha-174036-m03 has been rebooted, boot id: 77c12131-dcf2-42ee-a1bb-eb72e7e16375
	  Normal   NodeReady                61s (x2 over 61s)  kubelet          Node ha-174036-m03 status is now: NodeReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-174036-m03 event: Registered Node ha-174036-m03 in Controller
	
	
	Name:               ha-174036-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T17_49_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:48:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:59:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:59:47 +0000   Thu, 25 Jul 2024 17:59:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:59:47 +0000   Thu, 25 Jul 2024 17:59:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:59:47 +0000   Thu, 25 Jul 2024 17:59:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:59:47 +0000   Thu, 25 Jul 2024 17:59:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-174036-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ccffe731755d4ecfa1441a8d697922a2
	  System UUID:                ccffe731-755d-4ecf-a144-1a8d697922a2
	  Boot ID:                    d6b478e5-59e0-40f8-95c9-ca04c497bf40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bvhcw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-cvcj9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-174036-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-174036-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-174036-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-174036-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m29s              node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   RegisteredNode           2m18s              node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   NodeNotReady             109s               node-controller  Node ha-174036-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-174036-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-174036-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-174036-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-174036-m04 has been rebooted, boot id: d6b478e5-59e0-40f8-95c9-ca04c497bf40
	  Normal   NodeReady                8s                 kubelet          Node ha-174036-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.777476] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.055370] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056188] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.174852] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.114710] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.260280] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.890204] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.211746] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.064261] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251761] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.094069] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.327144] kauditd_printk_skb: 21 callbacks suppressed
	[Jul25 17:46] kauditd_printk_skb: 34 callbacks suppressed
	[ +46.764801] kauditd_printk_skb: 26 callbacks suppressed
	[Jul25 17:56] systemd-fstab-generator[3594]: Ignoring "noauto" option for root device
	[  +0.146485] systemd-fstab-generator[3606]: Ignoring "noauto" option for root device
	[  +0.166626] systemd-fstab-generator[3620]: Ignoring "noauto" option for root device
	[  +0.152973] systemd-fstab-generator[3632]: Ignoring "noauto" option for root device
	[  +0.263064] systemd-fstab-generator[3660]: Ignoring "noauto" option for root device
	[  +0.748032] systemd-fstab-generator[3761]: Ignoring "noauto" option for root device
	[  +7.209837] kauditd_printk_skb: 142 callbacks suppressed
	[  +5.021578] kauditd_printk_skb: 65 callbacks suppressed
	[Jul25 17:57] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.034087] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [0a0824c06b1faa0d3a493f79faa86d05f7aa41c628058a86915f6f4240edfadb] <==
	{"level":"warn","ts":"2024-07-25T17:58:49.145442Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:58:49.239171Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"28eb4253c22010c1","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-25T17:58:49.239378Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"28eb4253c22010c1","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-25T17:58:49.245924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-25T17:58:52.116609Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"28eb4253c22010c1","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:58:52.116733Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"28eb4253c22010c1","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:58:54.240107Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"28eb4253c22010c1","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:58:54.240226Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"28eb4253c22010c1","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:58:56.118962Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"28eb4253c22010c1","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:58:56.119178Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"28eb4253c22010c1","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:58:59.241238Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"28eb4253c22010c1","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:58:59.24128Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"28eb4253c22010c1","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:59:00.12136Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"28eb4253c22010c1","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:59:00.12142Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"28eb4253c22010c1","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:59:04.123825Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.253:2380/version","remote-member-id":"28eb4253c22010c1","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:59:04.123953Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"28eb4253c22010c1","error":"Get \"https://192.168.39.253:2380/version\": dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:59:04.242124Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"28eb4253c22010c1","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-25T17:59:04.242211Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"28eb4253c22010c1","rtt":"0s","error":"dial tcp 192.168.39.253:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-25T17:59:07.090895Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:59:07.105629Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:59:07.112621Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ffc3b7517aaad9f6","to":"28eb4253c22010c1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-25T17:59:07.112717Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:59:07.121981Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:59:07.122892Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ffc3b7517aaad9f6","to":"28eb4253c22010c1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-25T17:59:07.123112Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	
	
	==> etcd [5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9] <==
	2024/07/25 17:54:58 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-25T17:54:58.063709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.326849552s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-25T17:54:58.063742Z","caller":"traceutil/trace.go:171","msg":"trace[522483704] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; }","duration":"1.326904117s","start":"2024-07-25T17:54:56.736833Z","end":"2024-07-25T17:54:58.063738Z","steps":["trace[522483704] 'agreement among raft nodes before linearized reading'  (duration: 1.326866697s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:54:58.063886Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:54:56.736823Z","time spent":"1.327053279s","remote":"127.0.0.1:45344","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" limit:500 "}
	2024/07/25 17:54:58 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-25T17:54:58.128671Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T17:54:58.128756Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-25T17:54:58.128885Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ffc3b7517aaad9f6","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-25T17:54:58.129074Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129123Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129157Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129224Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.12933Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129389Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129402Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129408Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.129417Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.12947Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.129555Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.129581Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.129633Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.129656Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.132187Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-07-25T17:54:58.132354Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-07-25T17:54:58.13239Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-174036","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	
	
	==> kernel <==
	 17:59:56 up 14 min,  0 users,  load average: 0.50, 0.47, 0.32
	Linux ha-174036 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7a99835be7737720b17b3e6782c31ce85b5f6874ca897acfa6ea193b3ae2c944] <==
	I0725 17:59:19.679499       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:59:29.673915       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:59:29.673979       1 main.go:299] handling current node
	I0725 17:59:29.673999       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:59:29.674004       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 17:59:29.674221       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:59:29.674240       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:59:29.674324       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:59:29.674339       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:59:39.673079       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:59:39.673132       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:59:39.673487       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:59:39.673521       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:59:39.673636       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:59:39.673657       1 main.go:299] handling current node
	I0725 17:59:39.673677       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:59:39.673687       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 17:59:49.672881       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:59:49.672968       1 main.go:299] handling current node
	I0725 17:59:49.673009       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:59:49.673017       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 17:59:49.673210       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:59:49.673239       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:59:49.673334       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:59:49.673403       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad] <==
	I0725 17:54:30.455506       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:54:30.455592       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:54:30.455611       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:54:40.453888       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:54:40.453977       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:54:40.454318       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:54:40.454342       1 main.go:299] handling current node
	I0725 17:54:40.454354       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:54:40.454360       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 17:54:40.454413       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:54:40.454418       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	E0725 17:54:43.135441       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1854&timeout=6m8s&timeoutSeconds=368&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0725 17:54:50.456258       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:54:50.456350       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:54:50.456518       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:54:50.456541       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:54:50.456721       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:54:50.456745       1 main.go:299] handling current node
	I0725 17:54:50.456761       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:54:50.456810       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	W0725 17:54:55.871417       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	I0725 17:54:55.875166       1 trace.go:236] Trace[1073771757]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232 (25-Jul-2024 17:54:44.077) (total time: 11794ms):
	Trace[1073771757]: ---"Objects listed" error:Unauthorized 11794ms (17:54:55.871)
	Trace[1073771757]: [11.794158331s] [11.794158331s] END
	E0725 17:54:55.875231       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	
	
	==> kube-apiserver [01492692f4a55c16ac602bfa2f3f14b9ac44b0e6f7e146a055b5098d353fc765] <==
	I0725 17:57:24.518240       1 naming_controller.go:291] Starting NamingConditionController
	I0725 17:57:24.518271       1 establishing_controller.go:76] Starting EstablishingController
	I0725 17:57:24.518296       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0725 17:57:24.582329       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0725 17:57:24.592707       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0725 17:57:24.592819       1 policy_source.go:224] refreshing policies
	I0725 17:57:24.602995       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 17:57:24.609893       1 shared_informer.go:320] Caches are synced for configmaps
	I0725 17:57:24.609951       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 17:57:24.611558       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0725 17:57:24.611675       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0725 17:57:24.611708       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0725 17:57:24.614877       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 17:57:24.618139       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0725 17:57:24.618936       1 aggregator.go:165] initial CRD sync complete...
	I0725 17:57:24.619033       1 autoregister_controller.go:141] Starting autoregister controller
	I0725 17:57:24.619060       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0725 17:57:24.619141       1 cache.go:39] Caches are synced for autoregister controller
	I0725 17:57:24.631762       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0725 17:57:24.648507       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.197 192.168.39.253]
	I0725 17:57:24.650453       1 controller.go:615] quota admission added evaluator for: endpoints
	I0725 17:57:24.664866       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0725 17:57:24.668527       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0725 17:57:25.514983       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0725 17:57:25.893826       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.165 192.168.39.197 192.168.39.253]
	
	
	==> kube-apiserver [3a314f987a5258293be50a13db0a871cf04b614ced50e042107a35bacb471c4b] <==
	I0725 17:56:39.008019       1 options.go:221] external host was not specified, using 192.168.39.165
	I0725 17:56:39.009048       1 server.go:148] Version: v1.30.3
	I0725 17:56:39.009097       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:56:39.497426       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0725 17:56:39.499862       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0725 17:56:39.504327       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0725 17:56:39.504359       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0725 17:56:39.504561       1 instance.go:299] Using reconciler: lease
	W0725 17:56:59.497250       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0725 17:56:59.497251       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0725 17:56:59.509994       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	W0725 17:56:59.510001       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-controller-manager [693ed1ff9eb4b980ff521ff1ea90e641998feda2f306358c9445e6f40a662013] <==
	I0725 17:57:37.018365       1 shared_informer.go:320] Caches are synced for persistent volume
	I0725 17:57:37.021287       1 shared_informer.go:320] Caches are synced for PVC protection
	I0725 17:57:37.023992       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 17:57:37.046401       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 17:57:37.452165       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 17:57:37.484979       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 17:57:37.485027       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0725 17:57:38.257625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.404µs"
	I0725 17:57:43.632034       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.594µs"
	I0725 17:57:56.758275       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.932271ms"
	I0725 17:57:56.758396       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.165µs"
	I0725 17:58:01.220024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.806748ms"
	I0725 17:58:01.220273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.997µs"
	I0725 17:58:01.245558       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rpctz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rpctz\": the object has been modified; please apply your changes to the latest version and try again"
	I0725 17:58:01.245807       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"b3ca31f8-d9c1-4e0c-b52b-9e601f9daca7", APIVersion:"v1", ResourceVersion:"250", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rpctz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rpctz": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:58:07.011222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.283491ms"
	I0725 17:58:07.011303       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.466µs"
	I0725 17:58:21.213598       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.484365ms"
	I0725 17:58:21.216624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.09µs"
	I0725 17:58:21.215970       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rpctz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rpctz\": the object has been modified; please apply your changes to the latest version and try again"
	I0725 17:58:21.216477       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"b3ca31f8-d9c1-4e0c-b52b-9e601f9daca7", APIVersion:"v1", ResourceVersion:"250", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rpctz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rpctz": the object has been modified; please apply your changes to the latest version and try again
	I0725 17:58:55.642728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.086µs"
	I0725 17:59:15.797454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.401393ms"
	I0725 17:59:15.797542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.917µs"
	I0725 17:59:47.502459       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174036-m04"
	
	
	==> kube-controller-manager [e23512c087fe58c00af7b1190ac55bfb5a024fbe96400cccfe2fb3c3072b5ca4] <==
	I0725 17:56:39.856531       1 serving.go:380] Generated self-signed cert in-memory
	I0725 17:56:40.115589       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0725 17:56:40.115678       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:56:40.117847       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 17:56:40.117987       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 17:56:40.118142       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0725 17:56:40.118230       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0725 17:57:00.518687       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.165:8443/healthz\": dial tcp 192.168.39.165:8443: connect: connection refused"
	
	
	==> kube-proxy [3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136] <==
	E0725 17:53:44.896303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:47.967168       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:47.967234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:47.967319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:47.967376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:47.967330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:47.967469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:54.239265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:54.239402       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:54.239563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:54.239626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:54.239831       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:54.241385       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:03.456925       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:03.457534       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:06.528848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:06.528924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:06.528830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:06.529112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:18.815440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:18.815495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:31.104598       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:31.104851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:31.104931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:31.104986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [c7df72b32c957115d84bdfba24cadbbef9d124e077e0b3235b6292ac160093aa] <==
	I0725 17:56:39.909521       1 server_linux.go:69] "Using iptables proxy"
	E0725 17:56:40.128686       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174036\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0725 17:56:43.199215       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174036\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0725 17:56:46.272103       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174036\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0725 17:56:52.416263       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174036\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0725 17:57:01.631216       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174036\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0725 17:57:19.848817       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	I0725 17:57:19.940945       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:57:19.941033       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:57:19.941058       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:57:19.955288       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:57:19.956117       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:57:19.956202       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:57:19.958401       1 config.go:192] "Starting service config controller"
	I0725 17:57:19.958455       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:57:19.958506       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:57:19.958527       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:57:19.959300       1 config.go:319] "Starting node config controller"
	I0725 17:57:19.959338       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:57:20.058739       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 17:57:20.058955       1 shared_informer.go:320] Caches are synced for service config
	I0725 17:57:20.060724       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7bbd7762992c8f6c65c1db489cdbc5e30de5e522cb55ef194c2957dc6c00506a] <==
	W0725 17:57:15.254658       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.165:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:15.254821       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.165:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:15.305394       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.165:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:15.305493       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.165:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:16.150463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.165:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:16.150655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.165:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:16.708896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.165:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:16.709017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.165:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:16.896690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.165:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:16.896890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.165:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:17.503933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.165:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:17.504001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.165:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:19.231282       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.165:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:19.231373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.165:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:19.649359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.165:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:19.649429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.165:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:20.058376       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.165:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:20.058433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.165:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:20.786737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.165:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:20.786932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.165:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:21.696353       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.165:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:21.696471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.165:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:22.352374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.165:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:22.352439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.165:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	I0725 17:57:36.627164       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002] <==
	E0725 17:54:50.183128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 17:54:50.462190       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 17:54:50.462279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 17:54:50.733222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 17:54:50.733363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0725 17:54:50.761711       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 17:54:50.761888       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 17:54:50.935891       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 17:54:50.936043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 17:54:51.973387       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 17:54:51.973492       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 17:54:52.038407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 17:54:52.038533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 17:54:52.289344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 17:54:52.289451       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 17:54:52.688718       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 17:54:52.688958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 17:54:52.890362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 17:54:52.890491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 17:54:57.987090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 17:54:57.987123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0725 17:54:58.019657       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0725 17:54:58.019972       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0725 17:54:58.020142       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0725 17:54:58.020379       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 25 17:57:20 ha-174036 kubelet[1362]: E0725 17:57:20.828382    1362 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c9354422-69ff-4676-80d1-4940badf9b4e)\"" pod="kube-system/storage-provisioner" podUID="c9354422-69ff-4676-80d1-4940badf9b4e"
	Jul 25 17:57:22 ha-174036 kubelet[1362]: I0725 17:57:22.807935    1362 scope.go:117] "RemoveContainer" containerID="3a314f987a5258293be50a13db0a871cf04b614ced50e042107a35bacb471c4b"
	Jul 25 17:57:34 ha-174036 kubelet[1362]: I0725 17:57:34.807736    1362 scope.go:117] "RemoveContainer" containerID="274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963"
	Jul 25 17:57:34 ha-174036 kubelet[1362]: E0725 17:57:34.808953    1362 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c9354422-69ff-4676-80d1-4940badf9b4e)\"" pod="kube-system/storage-provisioner" podUID="c9354422-69ff-4676-80d1-4940badf9b4e"
	Jul 25 17:57:40 ha-174036 kubelet[1362]: E0725 17:57:40.855267    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:57:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:57:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:57:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:57:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:57:45 ha-174036 kubelet[1362]: I0725 17:57:45.807339    1362 scope.go:117] "RemoveContainer" containerID="274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963"
	Jul 25 17:57:45 ha-174036 kubelet[1362]: E0725 17:57:45.807568    1362 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c9354422-69ff-4676-80d1-4940badf9b4e)\"" pod="kube-system/storage-provisioner" podUID="c9354422-69ff-4676-80d1-4940badf9b4e"
	Jul 25 17:57:58 ha-174036 kubelet[1362]: I0725 17:57:58.807712    1362 scope.go:117] "RemoveContainer" containerID="274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963"
	Jul 25 17:58:10 ha-174036 kubelet[1362]: I0725 17:58:10.808166    1362 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-174036" podUID="2ce4bfe5-5441-4a28-889e-7743367f32b2"
	Jul 25 17:58:10 ha-174036 kubelet[1362]: I0725 17:58:10.837984    1362 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-174036"
	Jul 25 17:58:20 ha-174036 kubelet[1362]: I0725 17:58:20.826841    1362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-174036" podStartSLOduration=10.826748309 podStartE2EDuration="10.826748309s" podCreationTimestamp="2024-07-25 17:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-25 17:58:20.826291809 +0000 UTC m=+760.178563911" watchObservedRunningTime="2024-07-25 17:58:20.826748309 +0000 UTC m=+760.179020410"
	Jul 25 17:58:40 ha-174036 kubelet[1362]: E0725 17:58:40.851874    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:58:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:58:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:58:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:58:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:59:40 ha-174036 kubelet[1362]: E0725 17:59:40.856128    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:59:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:59:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:59:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:59:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 17:59:54.840125   31540 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19326-5877/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174036 -n ha-174036
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (421.90s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 stop -v=7 --alsologtostderr
E0725 18:01:58.589652   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174036 stop -v=7 --alsologtostderr: exit status 82 (2m0.470861702s)

                                                
                                                
-- stdout --
	* Stopping node "ha-174036-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:00:14.344608   31947 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:00:14.344865   31947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:00:14.344877   31947 out.go:304] Setting ErrFile to fd 2...
	I0725 18:00:14.344884   31947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:00:14.345099   31947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:00:14.345304   31947 out.go:298] Setting JSON to false
	I0725 18:00:14.345380   31947 mustload.go:65] Loading cluster: ha-174036
	I0725 18:00:14.345689   31947 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:00:14.345768   31947 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 18:00:14.345932   31947 mustload.go:65] Loading cluster: ha-174036
	I0725 18:00:14.346050   31947 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:00:14.346073   31947 stop.go:39] StopHost: ha-174036-m04
	I0725 18:00:14.346417   31947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:00:14.346458   31947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:00:14.361020   31947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0725 18:00:14.361543   31947 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:00:14.362134   31947 main.go:141] libmachine: Using API Version  1
	I0725 18:00:14.362166   31947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:00:14.362481   31947 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:00:14.364788   31947 out.go:177] * Stopping node "ha-174036-m04"  ...
	I0725 18:00:14.366221   31947 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0725 18:00:14.366252   31947 main.go:141] libmachine: (ha-174036-m04) Calling .DriverName
	I0725 18:00:14.366455   31947 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0725 18:00:14.366476   31947 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHHostname
	I0725 18:00:14.369403   31947 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 18:00:14.369960   31947 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:59:41 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 18:00:14.369996   31947 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 18:00:14.370187   31947 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHPort
	I0725 18:00:14.370513   31947 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHKeyPath
	I0725 18:00:14.370710   31947 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHUsername
	I0725 18:00:14.370891   31947 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m04/id_rsa Username:docker}
	I0725 18:00:14.451055   31947 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0725 18:00:14.503607   31947 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0725 18:00:14.556493   31947 main.go:141] libmachine: Stopping "ha-174036-m04"...
	I0725 18:00:14.556522   31947 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 18:00:14.558081   31947 main.go:141] libmachine: (ha-174036-m04) Calling .Stop
	I0725 18:00:14.561653   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 0/120
	I0725 18:00:15.563273   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 1/120
	I0725 18:00:16.564915   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 2/120
	I0725 18:00:17.566712   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 3/120
	I0725 18:00:18.568786   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 4/120
	I0725 18:00:19.570825   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 5/120
	I0725 18:00:20.572291   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 6/120
	I0725 18:00:21.574655   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 7/120
	I0725 18:00:22.576293   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 8/120
	I0725 18:00:23.577661   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 9/120
	I0725 18:00:24.580053   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 10/120
	I0725 18:00:25.582311   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 11/120
	I0725 18:00:26.583630   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 12/120
	I0725 18:00:27.584919   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 13/120
	I0725 18:00:28.586913   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 14/120
	I0725 18:00:29.589192   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 15/120
	I0725 18:00:30.591405   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 16/120
	I0725 18:00:31.592891   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 17/120
	I0725 18:00:32.594975   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 18/120
	I0725 18:00:33.596352   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 19/120
	I0725 18:00:34.598471   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 20/120
	I0725 18:00:35.599915   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 21/120
	I0725 18:00:36.601188   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 22/120
	I0725 18:00:37.602846   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 23/120
	I0725 18:00:38.604213   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 24/120
	I0725 18:00:39.605953   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 25/120
	I0725 18:00:40.607402   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 26/120
	I0725 18:00:41.608920   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 27/120
	I0725 18:00:42.610483   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 28/120
	I0725 18:00:43.611966   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 29/120
	I0725 18:00:44.614141   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 30/120
	I0725 18:00:45.615831   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 31/120
	I0725 18:00:46.617824   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 32/120
	I0725 18:00:47.619177   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 33/120
	I0725 18:00:48.620449   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 34/120
	I0725 18:00:49.622554   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 35/120
	I0725 18:00:50.623877   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 36/120
	I0725 18:00:51.625358   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 37/120
	I0725 18:00:52.626728   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 38/120
	I0725 18:00:53.628063   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 39/120
	I0725 18:00:54.630535   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 40/120
	I0725 18:00:55.632014   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 41/120
	I0725 18:00:56.633736   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 42/120
	I0725 18:00:57.636000   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 43/120
	I0725 18:00:58.637586   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 44/120
	I0725 18:00:59.639644   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 45/120
	I0725 18:01:00.641025   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 46/120
	I0725 18:01:01.642925   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 47/120
	I0725 18:01:02.644196   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 48/120
	I0725 18:01:03.645562   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 49/120
	I0725 18:01:04.647944   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 50/120
	I0725 18:01:05.649699   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 51/120
	I0725 18:01:06.651087   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 52/120
	I0725 18:01:07.652652   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 53/120
	I0725 18:01:08.654038   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 54/120
	I0725 18:01:09.656177   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 55/120
	I0725 18:01:10.658372   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 56/120
	I0725 18:01:11.659906   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 57/120
	I0725 18:01:12.661277   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 58/120
	I0725 18:01:13.662645   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 59/120
	I0725 18:01:14.665098   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 60/120
	I0725 18:01:15.666552   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 61/120
	I0725 18:01:16.668182   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 62/120
	I0725 18:01:17.669755   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 63/120
	I0725 18:01:18.672187   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 64/120
	I0725 18:01:19.674307   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 65/120
	I0725 18:01:20.675984   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 66/120
	I0725 18:01:21.677394   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 67/120
	I0725 18:01:22.678815   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 68/120
	I0725 18:01:23.680127   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 69/120
	I0725 18:01:24.682182   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 70/120
	I0725 18:01:25.683592   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 71/120
	I0725 18:01:26.685975   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 72/120
	I0725 18:01:27.687561   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 73/120
	I0725 18:01:28.689240   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 74/120
	I0725 18:01:29.691709   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 75/120
	I0725 18:01:30.693078   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 76/120
	I0725 18:01:31.694748   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 77/120
	I0725 18:01:32.696165   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 78/120
	I0725 18:01:33.697419   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 79/120
	I0725 18:01:34.699912   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 80/120
	I0725 18:01:35.701237   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 81/120
	I0725 18:01:36.702713   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 82/120
	I0725 18:01:37.704062   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 83/120
	I0725 18:01:38.705631   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 84/120
	I0725 18:01:39.707757   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 85/120
	I0725 18:01:40.709050   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 86/120
	I0725 18:01:41.711060   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 87/120
	I0725 18:01:42.712656   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 88/120
	I0725 18:01:43.715015   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 89/120
	I0725 18:01:44.717239   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 90/120
	I0725 18:01:45.719046   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 91/120
	I0725 18:01:46.720261   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 92/120
	I0725 18:01:47.722564   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 93/120
	I0725 18:01:48.724216   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 94/120
	I0725 18:01:49.726044   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 95/120
	I0725 18:01:50.727453   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 96/120
	I0725 18:01:51.728803   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 97/120
	I0725 18:01:52.730807   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 98/120
	I0725 18:01:53.732956   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 99/120
	I0725 18:01:54.735110   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 100/120
	I0725 18:01:55.736452   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 101/120
	I0725 18:01:56.737940   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 102/120
	I0725 18:01:57.739429   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 103/120
	I0725 18:01:58.740924   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 104/120
	I0725 18:01:59.742536   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 105/120
	I0725 18:02:00.744705   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 106/120
	I0725 18:02:01.746153   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 107/120
	I0725 18:02:02.747599   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 108/120
	I0725 18:02:03.749069   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 109/120
	I0725 18:02:04.751182   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 110/120
	I0725 18:02:05.752661   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 111/120
	I0725 18:02:06.754732   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 112/120
	I0725 18:02:07.756291   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 113/120
	I0725 18:02:08.757894   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 114/120
	I0725 18:02:09.759510   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 115/120
	I0725 18:02:10.761586   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 116/120
	I0725 18:02:11.763017   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 117/120
	I0725 18:02:12.764201   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 118/120
	I0725 18:02:13.765650   31947 main.go:141] libmachine: (ha-174036-m04) Waiting for machine to stop 119/120
	I0725 18:02:14.766265   31947 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0725 18:02:14.766322   31947 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0725 18:02:14.768116   31947 out.go:177] 
	W0725 18:02:14.769555   31947 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0725 18:02:14.769576   31947 out.go:239] * 
	* 
	W0725 18:02:14.771846   31947 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:02:14.773187   31947 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-174036 stop -v=7 --alsologtostderr": exit status 82
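The stderr above shows the stop path polling the VM state once per second and giving up after 120 attempts with GUEST_STOP_TIMEOUT (exit status 82). A minimal Go sketch of that poll-until-stopped pattern follows; it is illustrative only, and getState/requestStop are hypothetical placeholders rather than actual minikube or libmachine APIs.
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// getState and requestStop are hypothetical stand-ins for the libmachine
	// driver calls; they exist here only to make the sketch self-contained.
	func getState() string { return "Running" }
	func requestStop()     {}
	
	// stopWithTimeout asks the VM to stop, then polls its state once per
	// interval, giving up after the given number of attempts.
	func stopWithTimeout(attempts int, interval time.Duration) error {
		requestStop()
		for i := 0; i < attempts; i++ {
			if getState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}
	
	func main() {
		// 120 attempts at 1s intervals mirrors the 0/120 .. 119/120 lines above.
		if err := stopWithTimeout(120, time.Second); err != nil {
			fmt.Println("stop err:", err)
		}
	}
In the failing run the domain never leaves the "Running" state, so every attempt fails and the caller surfaces the Temporary Error shown above.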
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr: exit status 3 (18.985745975s)

                                                
                                                
-- stdout --
	ha-174036
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174036-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:02:14.820672   32365 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:02:14.820790   32365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:02:14.820798   32365 out.go:304] Setting ErrFile to fd 2...
	I0725 18:02:14.820802   32365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:02:14.820975   32365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:02:14.821126   32365 out.go:298] Setting JSON to false
	I0725 18:02:14.821154   32365 mustload.go:65] Loading cluster: ha-174036
	I0725 18:02:14.821264   32365 notify.go:220] Checking for updates...
	I0725 18:02:14.821509   32365 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:02:14.821525   32365 status.go:255] checking status of ha-174036 ...
	I0725 18:02:14.821893   32365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:02:14.821956   32365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:02:14.837609   32365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37345
	I0725 18:02:14.838123   32365 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:02:14.838639   32365 main.go:141] libmachine: Using API Version  1
	I0725 18:02:14.838654   32365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:02:14.839024   32365 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:02:14.839208   32365 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 18:02:14.840754   32365 status.go:330] ha-174036 host status = "Running" (err=<nil>)
	I0725 18:02:14.840771   32365 host.go:66] Checking if "ha-174036" exists ...
	I0725 18:02:14.841053   32365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:02:14.841090   32365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:02:14.855796   32365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39671
	I0725 18:02:14.856175   32365 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:02:14.856674   32365 main.go:141] libmachine: Using API Version  1
	I0725 18:02:14.856705   32365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:02:14.857030   32365 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:02:14.857257   32365 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 18:02:14.860312   32365 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 18:02:14.860776   32365 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 18:02:14.860798   32365 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 18:02:14.860955   32365 host.go:66] Checking if "ha-174036" exists ...
	I0725 18:02:14.861330   32365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:02:14.861369   32365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:02:14.876309   32365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0725 18:02:14.876804   32365 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:02:14.877370   32365 main.go:141] libmachine: Using API Version  1
	I0725 18:02:14.877398   32365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:02:14.877696   32365 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:02:14.877899   32365 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 18:02:14.878121   32365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 18:02:14.878157   32365 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 18:02:14.881477   32365 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 18:02:14.881865   32365 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 18:02:14.881895   32365 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 18:02:14.882050   32365 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 18:02:14.882231   32365 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 18:02:14.882409   32365 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 18:02:14.882562   32365 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 18:02:14.965169   32365 ssh_runner.go:195] Run: systemctl --version
	I0725 18:02:14.972098   32365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:02:14.987998   32365 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 18:02:14.988023   32365 api_server.go:166] Checking apiserver status ...
	I0725 18:02:14.988052   32365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:02:15.002917   32365 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5051/cgroup
	W0725 18:02:15.011961   32365 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5051/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:02:15.012017   32365 ssh_runner.go:195] Run: ls
	I0725 18:02:15.016377   32365 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 18:02:15.020370   32365 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 18:02:15.020390   32365 status.go:422] ha-174036 apiserver status = Running (err=<nil>)
	I0725 18:02:15.020399   32365 status.go:257] ha-174036 status: &{Name:ha-174036 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 18:02:15.020416   32365 status.go:255] checking status of ha-174036-m02 ...
	I0725 18:02:15.020778   32365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:02:15.020819   32365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:02:15.036118   32365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
	I0725 18:02:15.036624   32365 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:02:15.037102   32365 main.go:141] libmachine: Using API Version  1
	I0725 18:02:15.037129   32365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:02:15.037452   32365 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:02:15.037682   32365 main.go:141] libmachine: (ha-174036-m02) Calling .GetState
	I0725 18:02:15.039418   32365 status.go:330] ha-174036-m02 host status = "Running" (err=<nil>)
	I0725 18:02:15.039434   32365 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 18:02:15.039732   32365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:02:15.039771   32365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:02:15.054893   32365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36399
	I0725 18:02:15.055316   32365 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:02:15.055736   32365 main.go:141] libmachine: Using API Version  1
	I0725 18:02:15.055758   32365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:02:15.056106   32365 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:02:15.056272   32365 main.go:141] libmachine: (ha-174036-m02) Calling .GetIP
	I0725 18:02:15.059227   32365 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 18:02:15.059692   32365 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:56:42 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 18:02:15.059719   32365 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 18:02:15.059868   32365 host.go:66] Checking if "ha-174036-m02" exists ...
	I0725 18:02:15.060143   32365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:02:15.060181   32365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:02:15.075254   32365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34691
	I0725 18:02:15.075683   32365 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:02:15.076127   32365 main.go:141] libmachine: Using API Version  1
	I0725 18:02:15.076153   32365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:02:15.076615   32365 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:02:15.076785   32365 main.go:141] libmachine: (ha-174036-m02) Calling .DriverName
	I0725 18:02:15.076969   32365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 18:02:15.076987   32365 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHHostname
	I0725 18:02:15.079748   32365 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 18:02:15.080136   32365 main.go:141] libmachine: (ha-174036-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:a1:05", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:56:42 +0000 UTC Type:0 Mac:52:54:00:75:a1:05 Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:ha-174036-m02 Clientid:01:52:54:00:75:a1:05}
	I0725 18:02:15.080164   32365 main.go:141] libmachine: (ha-174036-m02) DBG | domain ha-174036-m02 has defined IP address 192.168.39.197 and MAC address 52:54:00:75:a1:05 in network mk-ha-174036
	I0725 18:02:15.080282   32365 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHPort
	I0725 18:02:15.080474   32365 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHKeyPath
	I0725 18:02:15.080631   32365 main.go:141] libmachine: (ha-174036-m02) Calling .GetSSHUsername
	I0725 18:02:15.080766   32365 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m02/id_rsa Username:docker}
	I0725 18:02:15.164444   32365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:02:15.181933   32365 kubeconfig.go:125] found "ha-174036" server: "https://192.168.39.254:8443"
	I0725 18:02:15.181966   32365 api_server.go:166] Checking apiserver status ...
	I0725 18:02:15.181996   32365 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:02:15.196999   32365 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup
	W0725 18:02:15.206191   32365 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:02:15.206238   32365 ssh_runner.go:195] Run: ls
	I0725 18:02:15.210265   32365 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0725 18:02:15.214414   32365 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0725 18:02:15.214439   32365 status.go:422] ha-174036-m02 apiserver status = Running (err=<nil>)
	I0725 18:02:15.214450   32365 status.go:257] ha-174036-m02 status: &{Name:ha-174036-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 18:02:15.214468   32365 status.go:255] checking status of ha-174036-m04 ...
	I0725 18:02:15.214832   32365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:02:15.214864   32365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:02:15.229778   32365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0725 18:02:15.230179   32365 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:02:15.230694   32365 main.go:141] libmachine: Using API Version  1
	I0725 18:02:15.230715   32365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:02:15.231038   32365 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:02:15.231235   32365 main.go:141] libmachine: (ha-174036-m04) Calling .GetState
	I0725 18:02:15.232838   32365 status.go:330] ha-174036-m04 host status = "Running" (err=<nil>)
	I0725 18:02:15.232855   32365 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 18:02:15.233120   32365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:02:15.233168   32365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:02:15.247918   32365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0725 18:02:15.248355   32365 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:02:15.248848   32365 main.go:141] libmachine: Using API Version  1
	I0725 18:02:15.248872   32365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:02:15.249186   32365 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:02:15.249380   32365 main.go:141] libmachine: (ha-174036-m04) Calling .GetIP
	I0725 18:02:15.252445   32365 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 18:02:15.252828   32365 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:59:41 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 18:02:15.252853   32365 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 18:02:15.252982   32365 host.go:66] Checking if "ha-174036-m04" exists ...
	I0725 18:02:15.253280   32365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:02:15.253313   32365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:02:15.267778   32365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37703
	I0725 18:02:15.268349   32365 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:02:15.268770   32365 main.go:141] libmachine: Using API Version  1
	I0725 18:02:15.268790   32365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:02:15.269061   32365 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:02:15.269239   32365 main.go:141] libmachine: (ha-174036-m04) Calling .DriverName
	I0725 18:02:15.269432   32365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 18:02:15.269457   32365 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHHostname
	I0725 18:02:15.272508   32365 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 18:02:15.272851   32365 main.go:141] libmachine: (ha-174036-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:d9:2c", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:59:41 +0000 UTC Type:0 Mac:52:54:00:02:d9:2c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-174036-m04 Clientid:01:52:54:00:02:d9:2c}
	I0725 18:02:15.272893   32365 main.go:141] libmachine: (ha-174036-m04) DBG | domain ha-174036-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:02:d9:2c in network mk-ha-174036
	I0725 18:02:15.273025   32365 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHPort
	I0725 18:02:15.273180   32365 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHKeyPath
	I0725 18:02:15.273347   32365 main.go:141] libmachine: (ha-174036-m04) Calling .GetSSHUsername
	I0725 18:02:15.273470   32365 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036-m04/id_rsa Username:docker}
	W0725 18:02:33.760570   32365 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	W0725 18:02:33.760664   32365 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	E0725 18:02:33.760686   32365 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0725 18:02:33.760697   32365 status.go:257] ha-174036-m04 status: &{Name:ha-174036-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0725 18:02:33.760737   32365 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr" : exit status 3
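The status failure above comes down to the SSH dial to 192.168.39.61:22 on ha-174036-m04 returning "no route to host", which is why that node is reported as Host:Error / Kubelet:Nonexistent. A minimal reachability probe using only the Go standard library is sketched below (illustrative only; the address is copied from the log, and net.DialTimeout is the stock stdlib call, not minikube's sshutil).
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Probe the worker node's SSH port the same way the failing dial did.
		addr := "192.168.39.61:22" // address reported in the log above
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// Expected here: "dial tcp 192.168.39.61:22: connect: no route to host"
			fmt.Println("node unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable")
	}
A probe like this reproduces the same connect error when the m04 VM is left half-stopped or off the network, which matches the state left behind by the failed stop.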
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174036 -n ha-174036
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174036 logs -n 25: (1.6200409s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-174036 ssh -n ha-174036-m02 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m03_ha-174036-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04:/home/docker/cp-test_ha-174036-m03_ha-174036-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m04 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m03_ha-174036-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp testdata/cp-test.txt                                               | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile106261026/001/cp-test_ha-174036-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036:/home/docker/cp-test_ha-174036-m04_ha-174036.txt                      |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036 sudo cat                                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036.txt                                |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m02:/home/docker/cp-test_ha-174036-m04_ha-174036-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m02 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m03:/home/docker/cp-test_ha-174036-m04_ha-174036-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n                                                                | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | ha-174036-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-174036 ssh -n ha-174036-m03 sudo cat                                         | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC | 25 Jul 24 17:49 UTC |
	|         | /home/docker/cp-test_ha-174036-m04_ha-174036-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-174036 node stop m02 -v=7                                                    | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-174036 node start m02 -v=7                                                   | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-174036 -v=7                                                          | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-174036 -v=7                                                               | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-174036 --wait=true -v=7                                                   | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:54 UTC | 25 Jul 24 17:59 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-174036                                                               | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:59 UTC |                     |
	| node    | ha-174036 node delete m03 -v=7                                                  | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 17:59 UTC | 25 Jul 24 18:00 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-174036 stop -v=7                                                             | ha-174036 | jenkins | v1.33.1 | 25 Jul 24 18:00 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 17:54:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:54:56.890131   29986 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:54:56.890498   29986 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:54:56.890534   29986 out.go:304] Setting ErrFile to fd 2...
	I0725 17:54:56.890542   29986 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:54:56.890982   29986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:54:56.891757   29986 out.go:298] Setting JSON to false
	I0725 17:54:56.892759   29986 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2241,"bootTime":1721927856,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 17:54:56.892818   29986 start.go:139] virtualization: kvm guest
	I0725 17:54:56.894739   29986 out.go:177] * [ha-174036] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 17:54:56.896670   29986 notify.go:220] Checking for updates...
	I0725 17:54:56.896755   29986 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 17:54:56.898451   29986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:54:56.899800   29986 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:54:56.901034   29986 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:54:56.902363   29986 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 17:54:56.903836   29986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 17:54:56.905583   29986 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:54:56.905701   29986 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 17:54:56.906142   29986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:54:56.906206   29986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:54:56.920947   29986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0725 17:54:56.921435   29986 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:54:56.922104   29986 main.go:141] libmachine: Using API Version  1
	I0725 17:54:56.922147   29986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:54:56.922436   29986 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:54:56.922615   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:54:56.957254   29986 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 17:54:56.958789   29986 start.go:297] selected driver: kvm2
	I0725 17:54:56.958805   29986 start.go:901] validating driver "kvm2" against &{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:54:56.958939   29986 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:54:56.959269   29986 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:54:56.959356   29986 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 17:54:56.973916   29986 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 17:54:56.974535   29986 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 17:54:56.974567   29986 cni.go:84] Creating CNI manager for ""
	I0725 17:54:56.974573   29986 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 17:54:56.974636   29986 start.go:340] cluster config:
	{Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:54:56.974759   29986 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:54:56.976638   29986 out.go:177] * Starting "ha-174036" primary control-plane node in "ha-174036" cluster
	I0725 17:54:56.977873   29986 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:54:56.977910   29986 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 17:54:56.977917   29986 cache.go:56] Caching tarball of preloaded images
	I0725 17:54:56.978004   29986 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 17:54:56.978014   29986 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 17:54:56.978120   29986 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/config.json ...
	I0725 17:54:56.978307   29986 start.go:360] acquireMachinesLock for ha-174036: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 17:54:56.978351   29986 start.go:364] duration metric: took 22.555µs to acquireMachinesLock for "ha-174036"
	I0725 17:54:56.978362   29986 start.go:96] Skipping create...Using existing machine configuration
	I0725 17:54:56.978369   29986 fix.go:54] fixHost starting: 
	I0725 17:54:56.978617   29986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:54:56.978644   29986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:54:56.992629   29986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0725 17:54:56.993031   29986 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:54:56.993426   29986 main.go:141] libmachine: Using API Version  1
	I0725 17:54:56.993444   29986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:54:56.993732   29986 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:54:56.993903   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:54:56.994034   29986 main.go:141] libmachine: (ha-174036) Calling .GetState
	I0725 17:54:56.995687   29986 fix.go:112] recreateIfNeeded on ha-174036: state=Running err=<nil>
	W0725 17:54:56.995723   29986 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 17:54:56.997586   29986 out.go:177] * Updating the running kvm2 "ha-174036" VM ...
	I0725 17:54:56.998956   29986 machine.go:94] provisionDockerMachine start ...
	I0725 17:54:56.998977   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:54:56.999186   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.001597   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.001989   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.002014   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.002153   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:54:57.002330   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.002479   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.002596   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:54:57.002710   29986 main.go:141] libmachine: Using SSH client type: native
	I0725 17:54:57.002882   29986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:54:57.002895   29986 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 17:54:57.113130   29986 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174036
	
	I0725 17:54:57.113158   29986 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:54:57.113413   29986 buildroot.go:166] provisioning hostname "ha-174036"
	I0725 17:54:57.113447   29986 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:54:57.113669   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.116172   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.116589   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.116618   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.116753   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:54:57.116913   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.117082   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.117195   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:54:57.117325   29986 main.go:141] libmachine: Using SSH client type: native
	I0725 17:54:57.117471   29986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:54:57.117481   29986 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174036 && echo "ha-174036" | sudo tee /etc/hostname
	I0725 17:54:57.241654   29986 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174036
	
	I0725 17:54:57.241682   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.244479   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.244878   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.244921   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.245050   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:54:57.245234   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.245409   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.245664   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:54:57.245879   29986 main.go:141] libmachine: Using SSH client type: native
	I0725 17:54:57.246087   29986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:54:57.246110   29986 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174036/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 17:54:57.352726   29986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 17:54:57.352760   29986 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 17:54:57.352817   29986 buildroot.go:174] setting up certificates
	I0725 17:54:57.352831   29986 provision.go:84] configureAuth start
	I0725 17:54:57.352849   29986 main.go:141] libmachine: (ha-174036) Calling .GetMachineName
	I0725 17:54:57.353105   29986 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:54:57.355599   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.356035   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.356071   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.356189   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.358430   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.358756   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.358782   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.358921   29986 provision.go:143] copyHostCerts
	I0725 17:54:57.358950   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:54:57.358991   29986 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 17:54:57.359004   29986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 17:54:57.359084   29986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 17:54:57.359241   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:54:57.359269   29986 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 17:54:57.359278   29986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 17:54:57.359328   29986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 17:54:57.359405   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:54:57.359428   29986 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 17:54:57.359438   29986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 17:54:57.359471   29986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 17:54:57.359547   29986 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.ha-174036 san=[127.0.0.1 192.168.39.165 ha-174036 localhost minikube]
	I0725 17:54:57.760045   29986 provision.go:177] copyRemoteCerts
	I0725 17:54:57.760105   29986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 17:54:57.760126   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.762693   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.763208   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.763237   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.763440   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:54:57.763671   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.763837   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:54:57.763994   29986 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:54:57.846268   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 17:54:57.846335   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 17:54:57.870254   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 17:54:57.870338   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0725 17:54:57.893831   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 17:54:57.893894   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 17:54:57.916731   29986 provision.go:87] duration metric: took 563.886786ms to configureAuth
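	As a cross-check, the server certificate copied to /etc/docker above can be verified against the CA from outside the run; an illustrative command (not part of this test; the profile name is taken from this log):
	    minikube -p ha-174036 ssh -- sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem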
	I0725 17:54:57.916754   29986 buildroot.go:189] setting minikube options for container-runtime
	I0725 17:54:57.916980   29986 config.go:182] Loaded profile config "ha-174036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:54:57.917044   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:54:57.919914   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.920281   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:54:57.920430   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:54:57.920429   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:54:57.920626   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.920800   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:54:57.920920   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:54:57.921067   29986 main.go:141] libmachine: Using SSH client type: native
	I0725 17:54:57.921259   29986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:54:57.921275   29986 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 17:56:28.806229   29986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 17:56:28.806256   29986 machine.go:97] duration metric: took 1m31.807285741s to provisionDockerMachine
	I0725 17:56:28.806270   29986 start.go:293] postStartSetup for "ha-174036" (driver="kvm2")
	I0725 17:56:28.806281   29986 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 17:56:28.806302   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:28.806674   29986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 17:56:28.806705   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:56:28.809878   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:28.810335   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:28.810356   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:28.810562   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:56:28.810793   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:28.810967   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:56:28.811151   29986 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:56:28.895793   29986 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 17:56:28.900204   29986 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 17:56:28.900235   29986 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 17:56:28.900299   29986 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 17:56:28.900448   29986 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 17:56:28.900461   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 17:56:28.900549   29986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 17:56:28.909785   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:56:28.932696   29986 start.go:296] duration metric: took 126.413581ms for postStartSetup
	I0725 17:56:28.932737   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:28.933085   29986 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0725 17:56:28.933112   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:56:28.935586   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:28.935936   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:28.935960   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:28.936084   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:56:28.936314   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:28.936484   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:56:28.936646   29986 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	W0725 17:56:29.018400   29986 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0725 17:56:29.018435   29986 fix.go:56] duration metric: took 1m32.040061634s for fixHost
	I0725 17:56:29.018460   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:56:29.021226   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.021641   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:29.021677   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.021848   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:56:29.022039   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:29.022183   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:29.022326   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:56:29.022518   29986 main.go:141] libmachine: Using SSH client type: native
	I0725 17:56:29.022710   29986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0725 17:56:29.022728   29986 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 17:56:29.128817   29986 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721930189.086470918
	
	I0725 17:56:29.128840   29986 fix.go:216] guest clock: 1721930189.086470918
	I0725 17:56:29.128850   29986 fix.go:229] Guest: 2024-07-25 17:56:29.086470918 +0000 UTC Remote: 2024-07-25 17:56:29.018444543 +0000 UTC m=+92.163296824 (delta=68.026375ms)
	I0725 17:56:29.128885   29986 fix.go:200] guest clock delta is within tolerance: 68.026375ms
	I0725 17:56:29.128891   29986 start.go:83] releasing machines lock for "ha-174036", held for 1m32.150534157s
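	For reference, the guest/host clock comparison logged above (fix.go runs date +%s.%N on the guest and diffs it against the host clock) can be reproduced by hand; a rough sketch, not part of this run:
	    host_ts=$(date +%s.%N)
	    guest_ts=$(minikube -p ha-174036 ssh -- date +%s.%N | tr -d '\r')
	    echo "guest-host delta: $(echo "$guest_ts - $host_ts" | bc) seconds"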
	I0725 17:56:29.128914   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:29.129180   29986 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:56:29.131987   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.132418   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:29.132444   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.132598   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:29.133183   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:29.133363   29986 main.go:141] libmachine: (ha-174036) Calling .DriverName
	I0725 17:56:29.133445   29986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 17:56:29.133494   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:56:29.133602   29986 ssh_runner.go:195] Run: cat /version.json
	I0725 17:56:29.133626   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHHostname
	I0725 17:56:29.136139   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.136206   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.136660   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:29.136683   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.136788   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:29.136809   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:29.136952   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:56:29.137043   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHPort
	I0725 17:56:29.137090   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:29.137205   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHKeyPath
	I0725 17:56:29.137270   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:56:29.137372   29986 main.go:141] libmachine: (ha-174036) Calling .GetSSHUsername
	I0725 17:56:29.137422   29986 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:56:29.137486   29986 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/ha-174036/id_rsa Username:docker}
	I0725 17:56:29.247305   29986 ssh_runner.go:195] Run: systemctl --version
	I0725 17:56:29.253279   29986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 17:56:29.416869   29986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 17:56:29.423428   29986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 17:56:29.423498   29986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 17:56:29.432445   29986 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0725 17:56:29.432478   29986 start.go:495] detecting cgroup driver to use...
	I0725 17:56:29.432572   29986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 17:56:29.449047   29986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 17:56:29.463038   29986 docker.go:217] disabling cri-docker service (if available) ...
	I0725 17:56:29.463106   29986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 17:56:29.476136   29986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 17:56:29.490226   29986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 17:56:29.632955   29986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 17:56:29.777609   29986 docker.go:233] disabling docker service ...
	I0725 17:56:29.777673   29986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 17:56:29.793795   29986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 17:56:29.807156   29986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 17:56:29.952347   29986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 17:56:30.094138   29986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 17:56:30.107832   29986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 17:56:30.126830   29986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 17:56:30.126899   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.136794   29986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 17:56:30.136864   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.146631   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.156062   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.165586   29986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 17:56:30.175506   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.185304   29986 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.195745   29986 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 17:56:30.205402   29986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 17:56:30.214176   29986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 17:56:30.223439   29986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:56:30.365113   29986 ssh_runner.go:195] Run: sudo systemctl restart crio
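	For context, the sed edits above all target /etc/crio/crio.conf.d/02-crio.conf; illustrative commands (not part of this run) to confirm the resulting pause image, cgroup manager and sysctl settings on the node:
	    minikube -p ha-174036 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    minikube -p ha-174036 ssh -- sudo crictl info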
	I0725 17:56:30.632242   29986 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 17:56:30.632309   29986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 17:56:30.636926   29986 start.go:563] Will wait 60s for crictl version
	I0725 17:56:30.636987   29986 ssh_runner.go:195] Run: which crictl
	I0725 17:56:30.640448   29986 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 17:56:30.679023   29986 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 17:56:30.679104   29986 ssh_runner.go:195] Run: crio --version
	I0725 17:56:30.707687   29986 ssh_runner.go:195] Run: crio --version
	I0725 17:56:30.737034   29986 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 17:56:30.738601   29986 main.go:141] libmachine: (ha-174036) Calling .GetIP
	I0725 17:56:30.741500   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:30.741957   29986 main.go:141] libmachine: (ha-174036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:45:3b", ip: ""} in network mk-ha-174036: {Iface:virbr1 ExpiryTime:2024-07-25 18:45:14 +0000 UTC Type:0 Mac:52:54:00:0f:45:3b Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-174036 Clientid:01:52:54:00:0f:45:3b}
	I0725 17:56:30.741981   29986 main.go:141] libmachine: (ha-174036) DBG | domain ha-174036 has defined IP address 192.168.39.165 and MAC address 52:54:00:0f:45:3b in network mk-ha-174036
	I0725 17:56:30.742234   29986 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 17:56:30.746742   29986 kubeadm.go:883] updating cluster {Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 17:56:30.746864   29986 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:56:30.746916   29986 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:56:30.791141   29986 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 17:56:30.791165   29986 crio.go:433] Images already preloaded, skipping extraction
	I0725 17:56:30.791227   29986 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 17:56:30.824517   29986 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 17:56:30.824539   29986 cache_images.go:84] Images are preloaded, skipping loading
	I0725 17:56:30.824555   29986 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.30.3 crio true true} ...
	I0725 17:56:30.824663   29986 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 17:56:30.824728   29986 ssh_runner.go:195] Run: crio config
	I0725 17:56:30.870313   29986 cni.go:84] Creating CNI manager for ""
	I0725 17:56:30.870333   29986 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 17:56:30.870342   29986 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 17:56:30.870367   29986 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174036 NodeName:ha-174036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 17:56:30.870482   29986 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174036"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 17:56:30.870501   29986 kube-vip.go:115] generating kube-vip config ...
	I0725 17:56:30.870540   29986 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0725 17:56:30.881615   29986 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0725 17:56:30.881728   29986 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
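	One way to see the VIP from the manifest above in action: kube-vip pins 192.168.39.254 on interface eth0 of whichever control-plane node holds the plndr-cp-lock lease, and the API should answer on that address. Illustrative checks, not part of this run:
	    minikube -p ha-174036 ssh -- ip addr show eth0
	    curl -k https://192.168.39.254:8443/version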
	I0725 17:56:30.881788   29986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 17:56:30.890853   29986 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 17:56:30.890909   29986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0725 17:56:30.899778   29986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0725 17:56:30.915238   29986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 17:56:30.930576   29986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0725 17:56:30.946094   29986 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
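	Both generated files land on the node at the paths shown above and can be inspected directly (illustrative commands, not part of this run):
	    minikube -p ha-174036 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    minikube -p ha-174036 ssh -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml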
	I0725 17:56:30.965461   29986 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0725 17:56:30.969283   29986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 17:56:31.115237   29986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 17:56:31.130076   29986 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036 for IP: 192.168.39.165
	I0725 17:56:31.130098   29986 certs.go:194] generating shared ca certs ...
	I0725 17:56:31.130116   29986 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:56:31.130398   29986 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 17:56:31.130511   29986 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 17:56:31.130541   29986 certs.go:256] generating profile certs ...
	I0725 17:56:31.130661   29986 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/client.key
	I0725 17:56:31.130702   29986 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.1cdd1643
	I0725 17:56:31.130729   29986 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.1cdd1643 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.197 192.168.39.253 192.168.39.254]
	I0725 17:56:31.252990   29986 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.1cdd1643 ...
	I0725 17:56:31.253026   29986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.1cdd1643: {Name:mkad08bfe7915fa1b928db9aa69060350dde447e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:56:31.253204   29986 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.1cdd1643 ...
	I0725 17:56:31.253217   29986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.1cdd1643: {Name:mkd35c3c45d71809ec73449c347505fd11f57b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 17:56:31.253297   29986 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt.1cdd1643 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt
	I0725 17:56:31.253454   29986 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key.1cdd1643 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key
	I0725 17:56:31.253592   29986 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key
	I0725 17:56:31.253607   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 17:56:31.253621   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 17:56:31.253636   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 17:56:31.253651   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 17:56:31.253667   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 17:56:31.253681   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 17:56:31.253695   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 17:56:31.253709   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 17:56:31.253765   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 17:56:31.253797   29986 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 17:56:31.253808   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 17:56:31.253837   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 17:56:31.253865   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 17:56:31.253889   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 17:56:31.253929   29986 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 17:56:31.253963   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:56:31.253978   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 17:56:31.253997   29986 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 17:56:31.254596   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 17:56:31.279525   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 17:56:31.302765   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 17:56:31.325924   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 17:56:31.351513   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 17:56:31.377394   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 17:56:31.402102   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 17:56:31.429122   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/ha-174036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 17:56:31.455740   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 17:56:31.480119   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 17:56:31.505643   29986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 17:56:31.528647   29986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 17:56:31.545727   29986 ssh_runner.go:195] Run: openssl version
	I0725 17:56:31.551250   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 17:56:31.560995   29986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:56:31.565256   29986 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:56:31.565310   29986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 17:56:31.572286   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 17:56:31.582523   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 17:56:31.592656   29986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 17:56:31.597140   29986 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 17:56:31.597196   29986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 17:56:31.602702   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 17:56:31.611762   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 17:56:31.621989   29986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 17:56:31.626565   29986 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 17:56:31.626615   29986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 17:56:31.632380   29986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 17:56:31.641783   29986 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 17:56:31.646297   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 17:56:31.651920   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 17:56:31.657268   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 17:56:31.662599   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 17:56:31.668305   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 17:56:31.673635   29986 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
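	Note that -checkend 86400 makes openssl exit 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, which is how the checks above detect imminent expiry; an illustrative stand-alone form (not part of this run):
	    minikube -p ha-174036 ssh -- "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo 'valid for >24h' || echo 'expires within 24h'"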
	I0725 17:56:31.678881   29986 kubeadm.go:392] StartCluster: {Name:ha-174036 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-174036 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.253 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:56:31.679014   29986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 17:56:31.679055   29986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 17:56:31.718553   29986 cri.go:89] found id: "b662566d7a9cf8ff4572a82511d312ee23d4e19da560a641b4a76bdfed62491b"
	I0725 17:56:31.718580   29986 cri.go:89] found id: "13d64c5040c5e5628b082b1e7381a1e5ec5af82efc473cc83c58123d5bfa0e72"
	I0725 17:56:31.718583   29986 cri.go:89] found id: "5cee1a84014a844d9abe8f83d86ff058ddfd1511faf129e93242f7d3c17cc425"
	I0725 17:56:31.718587   29986 cri.go:89] found id: "0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f"
	I0725 17:56:31.718590   29986 cri.go:89] found id: "35b4910d2dffd4c50b6c81ede4e225d7473a98b0a9834b082d0bcdc49420a72e"
	I0725 17:56:31.718592   29986 cri.go:89] found id: "7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f"
	I0725 17:56:31.718595   29986 cri.go:89] found id: "fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad"
	I0725 17:56:31.718597   29986 cri.go:89] found id: "3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136"
	I0725 17:56:31.718599   29986 cri.go:89] found id: "a61b54c0418380759094fff6b18b34e532b3761b6449b09813306455a54f8ec0"
	I0725 17:56:31.718604   29986 cri.go:89] found id: "0c7004ab2454d3b7076b9779cb99a3d84b827dc83eb1521fd69062d1bca490cd"
	I0725 17:56:31.718606   29986 cri.go:89] found id: "5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9"
	I0725 17:56:31.718609   29986 cri.go:89] found id: "fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002"
	I0725 17:56:31.718612   29986 cri.go:89] found id: "26c724f452769f279e45af6ab3ba27a7a4a86793455b5eabef65ad09403d1526"
	I0725 17:56:31.718615   29986 cri.go:89] found id: ""
	I0725 17:56:31.718654   29986 ssh_runner.go:195] Run: sudo runc list -f json
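	The two Run lines above are the commands minikube issues over SSH to enumerate existing kube-system containers before reconfiguring the cluster; the "found id" lines are the IDs returned by the first of them. A minimal Go sketch that reproduces that listing directly on the node, assuming crictl is on PATH and root access (illustrative, not minikube's ssh_runner code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same query logged by cri.go above: all containers (running or exited)
		// whose pod namespace label is kube-system, IDs only.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}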
	
	
	==> CRI-O <==
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.362369139Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721930554362327829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f53fa2a-0c51-41b3-809e-d9479969d32d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.362983977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d80597a5-196a-4444-a5cb-135fdfecc683 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.363061571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d80597a5-196a-4444-a5cb-135fdfecc683 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.363478610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6a8e16cbbcba142f18289c307b962508fe611c078fb6e6d42238ac8f269ba9b,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721930278824104448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01492692f4a55c16ac602bfa2f3f14b9ac44b0e6f7e146a055b5098d353fc765,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721930242818747710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693ed1ff9eb4b980ff521ff1ea90e641998feda2f306358c9445e6f40a662013,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721930239833708143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073042a011f3b051a394218b4e6683492baaa860c841625944ba2726b693ce23,PodSandboxId:6cd3158807dfcef5d5adf5564be66e9a1fcdbd7ca53e8174031bd896f90e40d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721930232088088067,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721930229825397710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644aa0844307921b217853eba501a6fd3004f9ad9d9176422c20761cf63ae9d8,PodSandboxId:3eeb4cb8ff52c049b6a4b3f1361d0745bb774abbbd949524e367b9535755c8a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721930209457883027,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8da94472d9fcc0702357dfc4c274563,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7df72b32c957115d84bdfba24cadbbef9d124e077e0b3235b6292ac160093aa,PodSandboxId:444d87aef2ed9ad0ac0e680114fee2b3c66341af8a816078a39369f612b7acb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721930198712748967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a99835be7737720b17b3e6782c31ce85b5f6874ca897acfa6ea193b3ae2c944,PodSandboxId:b74c45a0fd7ebb772d9bb56f1443219cd1dcc2958c90a72571921a458be0162a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721930198611256721,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23512c0
87fe58c00af7b1190ac55bfb5a024fbe96400cccfe2fb3c3072b5ca4,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721930198608154492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3
14f987a5258293be50a13db0a871cf04b614ced50e042107a35bacb471c4b,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721930198508208081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bbd7762992c8f6c65c1db489
cdbc5e30de5e522cb55ef194c2957dc6c00506a,PodSandboxId:db06f08e58b8da89319c93c1d9e4221e022da68a74bc009d19bafba1d489ecb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721930198465124670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0824c06b1faa0d3a493f79faa86d05f7aa41c628058a
86915f6f4240edfadb,PodSandboxId:f73c39a23d513d08fa42d1047bcdb1a90dca87fef8e9715c41bfaf7ea8a5466f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721930198395950522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e26472c1f859c4d8b3453c7ea285bff46d404bbd55c288b432904fc54c79c7f4,PodSandboxId:89cb3bf14d473a17326
b231bf05dca2f9bd842d3eb7ef8573d84372bca22e292,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192294909886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13f2f605cb5c57b2abb7562ed4ce6d071d6756a44d0d330d1c27b6f8846928,PodSandboxId:e805190624769bd65b4d507db9cd4890147f025c40fb5ed9fa66e5138bd21449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192253870381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721929705860309177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annot
ations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571452674649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kube
rnetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571379738863,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721929559436879718,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721929555002607030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721929534688765221,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721929534601497054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d80597a5-196a-4444-a5cb-135fdfecc683 name=/runtime.v1.RuntimeService/ListContainers
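	Each Request/Response pair in the CRI-O debug log above is a client (typically the kubelet) polling the CRI RuntimeService and ImageService endpoints (Version, ImageFsInfo, ListContainers) over CRI-O's gRPC socket; an empty filter produces the "No filters were applied, returning full container list" path and the large ListContainersResponse dumps. A minimal Go sketch of the same ListContainers call, assuming CRI-O's default /var/run/crio/crio.sock endpoint (illustrative only, not the kubelet's client code):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial CRI-O's runtime endpoint; the socket path is CRI-O's default
		// and may differ on other setups.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty filter: the same full-list case seen in the debug log above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-24s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}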
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.404072375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c313bb2-845a-4d0c-b2e5-04389215d92c name=/runtime.v1.RuntimeService/Version
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.404180965Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c313bb2-845a-4d0c-b2e5-04389215d92c name=/runtime.v1.RuntimeService/Version
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.405161887Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88483503-d454-41e5-a74b-076e0d207132 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.405609795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721930554405588347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88483503-d454-41e5-a74b-076e0d207132 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.406196693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef427a50-e531-4fde-a98f-e2c872a01258 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.406269368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef427a50-e531-4fde-a98f-e2c872a01258 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.406824049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6a8e16cbbcba142f18289c307b962508fe611c078fb6e6d42238ac8f269ba9b,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721930278824104448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01492692f4a55c16ac602bfa2f3f14b9ac44b0e6f7e146a055b5098d353fc765,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721930242818747710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693ed1ff9eb4b980ff521ff1ea90e641998feda2f306358c9445e6f40a662013,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721930239833708143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073042a011f3b051a394218b4e6683492baaa860c841625944ba2726b693ce23,PodSandboxId:6cd3158807dfcef5d5adf5564be66e9a1fcdbd7ca53e8174031bd896f90e40d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721930232088088067,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721930229825397710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644aa0844307921b217853eba501a6fd3004f9ad9d9176422c20761cf63ae9d8,PodSandboxId:3eeb4cb8ff52c049b6a4b3f1361d0745bb774abbbd949524e367b9535755c8a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721930209457883027,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8da94472d9fcc0702357dfc4c274563,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7df72b32c957115d84bdfba24cadbbef9d124e077e0b3235b6292ac160093aa,PodSandboxId:444d87aef2ed9ad0ac0e680114fee2b3c66341af8a816078a39369f612b7acb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721930198712748967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a99835be7737720b17b3e6782c31ce85b5f6874ca897acfa6ea193b3ae2c944,PodSandboxId:b74c45a0fd7ebb772d9bb56f1443219cd1dcc2958c90a72571921a458be0162a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721930198611256721,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23512c0
87fe58c00af7b1190ac55bfb5a024fbe96400cccfe2fb3c3072b5ca4,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721930198608154492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3
14f987a5258293be50a13db0a871cf04b614ced50e042107a35bacb471c4b,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721930198508208081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bbd7762992c8f6c65c1db489
cdbc5e30de5e522cb55ef194c2957dc6c00506a,PodSandboxId:db06f08e58b8da89319c93c1d9e4221e022da68a74bc009d19bafba1d489ecb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721930198465124670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0824c06b1faa0d3a493f79faa86d05f7aa41c628058a
86915f6f4240edfadb,PodSandboxId:f73c39a23d513d08fa42d1047bcdb1a90dca87fef8e9715c41bfaf7ea8a5466f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721930198395950522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e26472c1f859c4d8b3453c7ea285bff46d404bbd55c288b432904fc54c79c7f4,PodSandboxId:89cb3bf14d473a17326
b231bf05dca2f9bd842d3eb7ef8573d84372bca22e292,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192294909886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13f2f605cb5c57b2abb7562ed4ce6d071d6756a44d0d330d1c27b6f8846928,PodSandboxId:e805190624769bd65b4d507db9cd4890147f025c40fb5ed9fa66e5138bd21449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192253870381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721929705860309177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annot
ations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571452674649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kube
rnetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571379738863,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721929559436879718,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721929555002607030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721929534688765221,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721929534601497054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef427a50-e531-4fde-a98f-e2c872a01258 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.445911076Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a9214f7-9f5d-418c-b1a1-9ba38b883f49 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.446273247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a9214f7-9f5d-418c-b1a1-9ba38b883f49 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.447377847Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8181b440-dfb8-47b4-96d3-8f29765a3041 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.447964186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721930554447935885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8181b440-dfb8-47b4-96d3-8f29765a3041 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.448436302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a33e61d-1d66-4d28-af9a-feb16d194cdd name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.448493440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a33e61d-1d66-4d28-af9a-feb16d194cdd name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.448961429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6a8e16cbbcba142f18289c307b962508fe611c078fb6e6d42238ac8f269ba9b,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721930278824104448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01492692f4a55c16ac602bfa2f3f14b9ac44b0e6f7e146a055b5098d353fc765,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721930242818747710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693ed1ff9eb4b980ff521ff1ea90e641998feda2f306358c9445e6f40a662013,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721930239833708143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073042a011f3b051a394218b4e6683492baaa860c841625944ba2726b693ce23,PodSandboxId:6cd3158807dfcef5d5adf5564be66e9a1fcdbd7ca53e8174031bd896f90e40d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721930232088088067,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721930229825397710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644aa0844307921b217853eba501a6fd3004f9ad9d9176422c20761cf63ae9d8,PodSandboxId:3eeb4cb8ff52c049b6a4b3f1361d0745bb774abbbd949524e367b9535755c8a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721930209457883027,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8da94472d9fcc0702357dfc4c274563,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7df72b32c957115d84bdfba24cadbbef9d124e077e0b3235b6292ac160093aa,PodSandboxId:444d87aef2ed9ad0ac0e680114fee2b3c66341af8a816078a39369f612b7acb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721930198712748967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a99835be7737720b17b3e6782c31ce85b5f6874ca897acfa6ea193b3ae2c944,PodSandboxId:b74c45a0fd7ebb772d9bb56f1443219cd1dcc2958c90a72571921a458be0162a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721930198611256721,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23512c0
87fe58c00af7b1190ac55bfb5a024fbe96400cccfe2fb3c3072b5ca4,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721930198608154492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3
14f987a5258293be50a13db0a871cf04b614ced50e042107a35bacb471c4b,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721930198508208081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bbd7762992c8f6c65c1db489
cdbc5e30de5e522cb55ef194c2957dc6c00506a,PodSandboxId:db06f08e58b8da89319c93c1d9e4221e022da68a74bc009d19bafba1d489ecb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721930198465124670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0824c06b1faa0d3a493f79faa86d05f7aa41c628058a
86915f6f4240edfadb,PodSandboxId:f73c39a23d513d08fa42d1047bcdb1a90dca87fef8e9715c41bfaf7ea8a5466f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721930198395950522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e26472c1f859c4d8b3453c7ea285bff46d404bbd55c288b432904fc54c79c7f4,PodSandboxId:89cb3bf14d473a17326
b231bf05dca2f9bd842d3eb7ef8573d84372bca22e292,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192294909886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13f2f605cb5c57b2abb7562ed4ce6d071d6756a44d0d330d1c27b6f8846928,PodSandboxId:e805190624769bd65b4d507db9cd4890147f025c40fb5ed9fa66e5138bd21449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192253870381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721929705860309177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annot
ations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571452674649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kube
rnetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571379738863,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721929559436879718,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721929555002607030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721929534688765221,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721929534601497054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a33e61d-1d66-4d28-af9a-feb16d194cdd name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.491750893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b797e16-ea86-4e9e-8cfc-14582cb7ba60 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.491872475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b797e16-ea86-4e9e-8cfc-14582cb7ba60 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.492965950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3a493b5-c45f-4f24-bcb1-fd673bd55065 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.493427804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721930554493403958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3a493b5-c45f-4f24-bcb1-fd673bd55065 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.493972391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43396d7a-5c11-4e6c-954b-6cacd9c07f53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.494039614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43396d7a-5c11-4e6c-954b-6cacd9c07f53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:02:34 ha-174036 crio[3675]: time="2024-07-25 18:02:34.494494199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6a8e16cbbcba142f18289c307b962508fe611c078fb6e6d42238ac8f269ba9b,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721930278824104448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01492692f4a55c16ac602bfa2f3f14b9ac44b0e6f7e146a055b5098d353fc765,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721930242818747710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693ed1ff9eb4b980ff521ff1ea90e641998feda2f306358c9445e6f40a662013,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721930239833708143,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:073042a011f3b051a394218b4e6683492baaa860c841625944ba2726b693ce23,PodSandboxId:6cd3158807dfcef5d5adf5564be66e9a1fcdbd7ca53e8174031bd896f90e40d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721930232088088067,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annotations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963,PodSandboxId:964397aa7270b8658cacfea782193f2c060e7188ed3b6b6d78750d8e353e20ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721930229825397710,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9354422-69ff-4676-80d1-4940badf9b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f055deb,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:644aa0844307921b217853eba501a6fd3004f9ad9d9176422c20761cf63ae9d8,PodSandboxId:3eeb4cb8ff52c049b6a4b3f1361d0745bb774abbbd949524e367b9535755c8a3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721930209457883027,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8da94472d9fcc0702357dfc4c274563,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7df72b32c957115d84bdfba24cadbbef9d124e077e0b3235b6292ac160093aa,PodSandboxId:444d87aef2ed9ad0ac0e680114fee2b3c66341af8a816078a39369f612b7acb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721930198712748967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a99835be7737720b17b3e6782c31ce85b5f6874ca897acfa6ea193b3ae2c944,PodSandboxId:b74c45a0fd7ebb772d9bb56f1443219cd1dcc2958c90a72571921a458be0162a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721930198611256721,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23512c0
87fe58c00af7b1190ac55bfb5a024fbe96400cccfe2fb3c3072b5ca4,PodSandboxId:8941cb3be23d39ce46c9398bf1bc3d6c57279306c483384948a4eed0f920a474,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721930198608154492,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60b0c4b255cf168fc0ff6e1b5b5a5e1f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3
14f987a5258293be50a13db0a871cf04b614ced50e042107a35bacb471c4b,PodSandboxId:a3172a2fbe2ec44efff094d0f49cfbef20b63a949564fd21255bae2ee564bb94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721930198508208081,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a684b92a47207375cde77b0049b934b,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bbd7762992c8f6c65c1db489
cdbc5e30de5e522cb55ef194c2957dc6c00506a,PodSandboxId:db06f08e58b8da89319c93c1d9e4221e022da68a74bc009d19bafba1d489ecb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721930198465124670,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a0824c06b1faa0d3a493f79faa86d05f7aa41c628058a
86915f6f4240edfadb,PodSandboxId:f73c39a23d513d08fa42d1047bcdb1a90dca87fef8e9715c41bfaf7ea8a5466f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721930198395950522,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e26472c1f859c4d8b3453c7ea285bff46d404bbd55c288b432904fc54c79c7f4,PodSandboxId:89cb3bf14d473a17326
b231bf05dca2f9bd842d3eb7ef8573d84372bca22e292,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192294909886,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13f2f605cb5c57b2abb7562ed4ce6d071d6756a44d0d330d1c27b6f8846928,PodSandboxId:e805190624769bd65b4d507db9cd4890147f025c40fb5ed9fa66e5138bd21449,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721930192253870381,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kubernetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb36d42911bccb0a32406d97abde18eb45a7e0b2bf1c59f2b9a422469e1cf3,PodSandboxId:c949824afb5f45d9711f19876dbefc9d996099cccbc388738d097e91b340ae4a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721929705860309177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2mwrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e874d68f-5f06-44af-882d-fb479da5a101,},Annot
ations:map[string]string{io.kubernetes.container.hash: dad7bfa1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f,PodSandboxId:9bb7062a78b838eadc4cbc20a5daddd8c2e7e803bfd2da9c487dba8f1da76b0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571452674649,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-flblg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94857bc1-d7ba-466b-91d7-e2d5041159f2,},Annotations:map[string]string{io.kube
rnetes.container.hash: b4e72978,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f,PodSandboxId:77a88d259037c02ac77ad493d6173b07b29e0eaa4f9bd92a10389a4fd57c3280,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721929571379738863,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vtr9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7a6c22-ba2b-44c7-a46c-6227fcd4e89a,},Annotations:map[string]string{io.kubernetes.container.hash: b1cb2afc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad,PodSandboxId:08e5a1f0a23d2160b3314c2ac2a7cb790749fdf259fe417c0b8b7215f56873d5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721929559436879718,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2c2n8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ed79cb-52d7-4dfa-a3a0-02329169d86c,},Annotations:map[string]string{io.kubernetes.container.hash: aba6e72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136,PodSandboxId:c399536e97e26365f11796212491808e6e2ea097783389aea26af720489136f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721929555002607030,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6jdn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f13b463b-f7f9-4b49-8e29-209cb153a6e6,},Annotations:map[string]string{io.kubernetes.container.hash: 18398de2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9,PodSandboxId:18925eee7f455b7d6edac111000b68d5a7c0540c75e9f4c303f1045c0d074c95,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721929534688765221,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243af717eadb4d61aadfedd2ed2a3083,},Annotations:map[string]string{io.kubernetes.container.hash: 42e108a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002,PodSandboxId:792a8f45313d0d458ca4530da219308841bfa0805526bd53c725b5058370a264,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1721929534601497054,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b29243a17ab88a279707af48677c8a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43396d7a-5c11-4e6c-954b-6cacd9c07f53 name=/runtime.v1.RuntimeService/ListContainers
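The two ListContainers responses above are back-to-back debug-level polls of CRI-O's RuntimeService; the container set they return is identical, only the request IDs differ. As a minimal sketch of pulling just these entries out of the runtime logs on the node — assuming the minikube profile is named after the node (ha-174036) and that CRI-O runs as the systemd unit "crio", which the crio[3675] journal prefix suggests:

# hypothetical: filter the CRI-O journal down to ListContainers request/response entries
out/minikube-linux-amd64 -p ha-174036 ssh "sudo journalctl -u crio --no-pager | grep ListContainers | tail -n 20"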
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6a8e16cbbcba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   964397aa7270b       storage-provisioner
	01492692f4a55       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Running             kube-apiserver            3                   a3172a2fbe2ec       kube-apiserver-ha-174036
	693ed1ff9eb4b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Running             kube-controller-manager   2                   8941cb3be23d3       kube-controller-manager-ha-174036
	073042a011f3b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   6cd3158807dfc       busybox-fc5497c4f-2mwrb
	274822bb9a65e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   964397aa7270b       storage-provisioner
	644aa08443079       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   3eeb4cb8ff52c       kube-vip-ha-174036
	c7df72b32c957       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   444d87aef2ed9       kube-proxy-s6jdn
	7a99835be7737       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   b74c45a0fd7eb       kindnet-2c2n8
	e23512c087fe5       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   8941cb3be23d3       kube-controller-manager-ha-174036
	3a314f987a525       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   a3172a2fbe2ec       kube-apiserver-ha-174036
	7bbd7762992c8       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   db06f08e58b8d       kube-scheduler-ha-174036
	0a0824c06b1fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   f73c39a23d513       etcd-ha-174036
	e26472c1f859c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   1                   89cb3bf14d473       coredns-7db6d8ff4d-vtr9p
	3a13f2f605cb5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   1                   e805190624769       coredns-7db6d8ff4d-flblg
	2bbb36d42911b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   c949824afb5f4       busybox-fc5497c4f-2mwrb
	0110c72f3cc1a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   9bb7062a78b83       coredns-7db6d8ff4d-flblg
	7faf8fe41b978       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   77a88d259037c       coredns-7db6d8ff4d-vtr9p
	fe8ee70c5b693       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   08e5a1f0a23d2       kindnet-2c2n8
	3afce6c1101d6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   c399536e97e26       kube-proxy-s6jdn
	5de803e0d40d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   18925eee7f455       etcd-ha-174036
	fe2d3acd60c40       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   792a8f45313d0       kube-scheduler-ha-174036
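The container status table above is the report's formatted view of the node's container list. Roughly the same listing could be re-queried straight from the runtime with crictl; the socket path below is the stock CRI-O default and an assumption, not something taken from this log:

# hypothetical re-query of the container list via the CRI-O socket
out/minikube-linux-amd64 -p ha-174036 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"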
	
	
	==> coredns [0110c72f3cc1acbe6f07147564092df281aeb6888868b6dc5a643dda02be8c3f] <==
	[INFO] 10.244.2.2:51951 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060263s
	[INFO] 10.244.0.4:35903 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122708s
	[INFO] 10.244.0.4:47190 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168947s
	[INFO] 10.244.2.2:57705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000173851s
	[INFO] 10.244.1.2:46849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111229s
	[INFO] 10.244.1.2:45248 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080498s
	[INFO] 10.244.1.2:34246 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112642s
	[INFO] 10.244.1.2:60449 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082776s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1854&timeout=9m38s&timeoutSeconds=578&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1839&timeout=9m50s&timeoutSeconds=590&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1834&timeout=8m58s&timeoutSeconds=538&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1834": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1834": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1839": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1839": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1854": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1854": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
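The repeated "dial tcp 10.96.0.1:443: connect: no route to host" and "Unauthorized" errors above are the coredns kubernetes plugin's client-go reflectors failing to list/watch Namespaces, Services, and EndpointSlices through the kubernetes Service VIP while the control plane was restarting. A quick way to tell VIP reachability apart from an auth problem is a probe from inside the cluster; the throwaway curl image below is an assumption and not part of this test run (minikube's default context name, matching the profile, is also assumed):

# hypothetical in-cluster probe of the API Service VIP coredns cannot reach
# a timeout or 000 points at routing (no route to host); 200/401/403 means the VIP is reachable
kubectl --context ha-174036 run vip-probe --rm -i --image=curlimages/curl --restart=Never -- \
  curl -sk -o /dev/null -w '%{http_code}\n' https://10.96.0.1:443/version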
	
	
	==> coredns [3a13f2f605cb5c57b2abb7562ed4ce6d071d6756a44d0d330d1c27b6f8846928] <==
	[INFO] plugin/kubernetes: Trace[802125844]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:56:41.530) (total time: 10001ms):
	Trace[802125844]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:56:51.531)
	Trace[802125844]: [10.001639272s] [10.001639272s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1181883564]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:56:46.430) (total time: 10000ms):
	Trace[1181883564]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (17:56:56.431)
	Trace[1181883564]: [10.000956393s] [10.000956393s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
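The "plugin/ready: Still waiting on: \"kubernetes\"" lines mean coredns's ready plugin keeps reporting not-ready until the kubernetes plugin finishes its initial sync against the API, while the health plugin keeps answering as long as the process is alive. As a sketch, both endpoints could be probed directly for one of the coredns pods listed earlier, assuming coredns's default ports for the health (:8080/health) and ready (:8181/ready) plugins:

# hypothetical direct probes of the coredns health and readiness endpoints
kubectl --context ha-174036 -n kube-system port-forward pod/coredns-7db6d8ff4d-flblg 18080:8080 18181:8181 &
curl -s http://127.0.0.1:18080/health                                    # health plugin: OK while the process is up
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:18181/ready    # 200 only after the kubernetes plugin has synced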
	
	
	==> coredns [7faf8fe41b97854eec7a23d14c6a5cff5b70504bd0744ff01fab7d7082db873f] <==
	[INFO] 10.244.2.2:46828 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001959216s
	[INFO] 10.244.2.2:50785 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205115s
	[INFO] 10.244.2.2:60376 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000134751s
	[INFO] 10.244.2.2:42181 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000185565s
	[INFO] 10.244.1.2:33441 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154369s
	[INFO] 10.244.1.2:48932 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095106s
	[INFO] 10.244.1.2:57921 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014197s
	[INFO] 10.244.1.2:36171 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087145s
	[INFO] 10.244.0.4:34307 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088823s
	[INFO] 10.244.0.4:57061 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114297s
	[INFO] 10.244.2.2:54914 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000215592s
	[INFO] 10.244.1.2:41895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148191s
	[INFO] 10.244.1.2:43543 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125877s
	[INFO] 10.244.1.2:60822 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099959s
	[INFO] 10.244.1.2:55371 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085133s
	[INFO] 10.244.0.4:60792 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135863s
	[INFO] 10.244.0.4:34176 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000198465s
	[INFO] 10.244.2.2:48933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196507s
	[INFO] 10.244.2.2:49323 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179955s
	[INFO] 10.244.2.2:55358 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000098973s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e26472c1f859c4d8b3453c7ea285bff46d404bbd55c288b432904fc54c79c7f4] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58666->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58666->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-174036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T17_45_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:45:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:02:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 18:02:26 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 18:02:26 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 18:02:26 +0000   Thu, 25 Jul 2024 17:45:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 18:02:26 +0000   Thu, 25 Jul 2024 17:46:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    ha-174036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1be020ed9784dbcb9721764c32b616e
	  System UUID:                a1be020e-d978-4dbc-b972-1764c32b616e
	  Boot ID:                    96d25b24-9958-4e84-b55d-0be006e0dab8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2mwrb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-flblg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-vtr9p             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-174036                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-2c2n8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-174036             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-174036    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-s6jdn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-174036             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-174036                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 16m    kube-proxy       
	  Normal   Starting                 5m14s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m    kubelet          Node ha-174036 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-174036 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m    kubelet          Node ha-174036 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m    node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal   NodeReady                16m    kubelet          Node ha-174036 status is now: NodeReady
	  Normal   RegisteredNode           15m    node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal   RegisteredNode           14m    node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Warning  ContainerGCFailed        6m54s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m8s   node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal   RegisteredNode           4m57s  node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	  Normal   RegisteredNode           3m10s  node-controller  Node ha-174036 event: Registered Node ha-174036 in Controller
	
	
	Name:               ha-174036-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T17_46_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:46:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:02:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:58:00 +0000   Thu, 25 Jul 2024 17:57:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:58:00 +0000   Thu, 25 Jul 2024 17:57:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:58:00 +0000   Thu, 25 Jul 2024 17:57:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:58:00 +0000   Thu, 25 Jul 2024 17:57:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.197
	  Hostname:    ha-174036-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8093ac6d205c434d94cbb70f3b2823ae
	  System UUID:                8093ac6d-205c-434d-94cb-b70f3b2823ae
	  Boot ID:                    4309dfee-41d4-42dc-a2fc-9f32b8231986
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wtxzv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-174036-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-k4d8x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-174036-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-174036-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-xwvdm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-174036-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-174036-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m42s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-174036-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-174036-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-174036-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-174036-m02 status is now: NodeNotReady
	  Normal  Starting                 5m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m42s (x8 over 5m42s)  kubelet          Node ha-174036-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m42s (x8 over 5m42s)  kubelet          Node ha-174036-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m42s (x7 over 5m42s)  kubelet          Node ha-174036-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-174036-m02 event: Registered Node ha-174036-m02 in Controller
	
	
	Name:               ha-174036-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174036-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=ha-174036
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T17_49_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:48:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174036-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:00:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 25 Jul 2024 17:59:47 +0000   Thu, 25 Jul 2024 18:00:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 25 Jul 2024 17:59:47 +0000   Thu, 25 Jul 2024 18:00:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 25 Jul 2024 17:59:47 +0000   Thu, 25 Jul 2024 18:00:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 25 Jul 2024 17:59:47 +0000   Thu, 25 Jul 2024 18:00:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-174036-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ccffe731755d4ecfa1441a8d697922a2
	  System UUID:                ccffe731-755d-4ecf-a144-1a8d697922a2
	  Boot ID:                    d6b478e5-59e0-40f8-95c9-ca04c497bf40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-flc88    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-bvhcw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-cvcj9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-174036-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-174036-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-174036-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-174036-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m8s                   node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   RegisteredNode           4m57s                  node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   NodeNotReady             4m28s                  node-controller  Node ha-174036-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-174036-m04 event: Registered Node ha-174036-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-174036-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-174036-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-174036-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-174036-m04 has been rebooted, boot id: d6b478e5-59e0-40f8-95c9-ca04c497bf40
	  Normal   NodeReady                2m47s                  kubelet          Node ha-174036-m04 status is now: NodeReady
	  Normal   NodeNotReady             102s                   node-controller  Node ha-174036-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.777476] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.055370] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056188] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.174852] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.114710] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.260280] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.890204] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.211746] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.064261] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251761] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.094069] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.327144] kauditd_printk_skb: 21 callbacks suppressed
	[Jul25 17:46] kauditd_printk_skb: 34 callbacks suppressed
	[ +46.764801] kauditd_printk_skb: 26 callbacks suppressed
	[Jul25 17:56] systemd-fstab-generator[3594]: Ignoring "noauto" option for root device
	[  +0.146485] systemd-fstab-generator[3606]: Ignoring "noauto" option for root device
	[  +0.166626] systemd-fstab-generator[3620]: Ignoring "noauto" option for root device
	[  +0.152973] systemd-fstab-generator[3632]: Ignoring "noauto" option for root device
	[  +0.263064] systemd-fstab-generator[3660]: Ignoring "noauto" option for root device
	[  +0.748032] systemd-fstab-generator[3761]: Ignoring "noauto" option for root device
	[  +7.209837] kauditd_printk_skb: 142 callbacks suppressed
	[  +5.021578] kauditd_printk_skb: 65 callbacks suppressed
	[Jul25 17:57] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.034087] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [0a0824c06b1faa0d3a493f79faa86d05f7aa41c628058a86915f6f4240edfadb] <==
	{"level":"info","ts":"2024-07-25T17:59:07.112621Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ffc3b7517aaad9f6","to":"28eb4253c22010c1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-25T17:59:07.112717Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:59:07.121981Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:59:07.122892Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ffc3b7517aaad9f6","to":"28eb4253c22010c1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-25T17:59:07.123112Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"warn","ts":"2024-07-25T18:00:00.846263Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.253:58506","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-25T18:00:00.87388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 switched to configuration voters=(5618312471305538947 18429775660708452854)"}
	{"level":"info","ts":"2024-07-25T18:00:00.875814Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","removed-remote-peer-id":"28eb4253c22010c1","removed-remote-peer-urls":["https://192.168.39.253:2380"]}
	{"level":"info","ts":"2024-07-25T18:00:00.875909Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"28eb4253c22010c1"}
	{"level":"warn","ts":"2024-07-25T18:00:00.876181Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T18:00:00.876235Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"28eb4253c22010c1"}
	{"level":"warn","ts":"2024-07-25T18:00:00.876507Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T18:00:00.876549Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"28eb4253c22010c1"}
	{"level":"warn","ts":"2024-07-25T18:00:00.876591Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"ffc3b7517aaad9f6","removed-member-id":"28eb4253c22010c1"}
	{"level":"warn","ts":"2024-07-25T18:00:00.876646Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-07-25T18:00:00.876897Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"warn","ts":"2024-07-25T18:00:00.877085Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1","error":"context canceled"}
	{"level":"warn","ts":"2024-07-25T18:00:00.877137Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"28eb4253c22010c1","error":"failed to read 28eb4253c22010c1 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-25T18:00:00.877168Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"warn","ts":"2024-07-25T18:00:00.877271Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1","error":"context canceled"}
	{"level":"info","ts":"2024-07-25T18:00:00.877308Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T18:00:00.877327Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T18:00:00.877341Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"ffc3b7517aaad9f6","removed-remote-peer-id":"28eb4253c22010c1"}
	{"level":"warn","ts":"2024-07-25T18:00:00.894598Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"ffc3b7517aaad9f6","remote-peer-id-stream-handler":"ffc3b7517aaad9f6","remote-peer-id-from":"28eb4253c22010c1"}
	{"level":"warn","ts":"2024-07-25T18:00:00.898663Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"ffc3b7517aaad9f6","remote-peer-id-stream-handler":"ffc3b7517aaad9f6","remote-peer-id-from":"28eb4253c22010c1"}
	
	
	==> etcd [5de803e0d40d9bb8f191c424d137a32126b1146176df9cba8826a5d24c5a39b9] <==
	2024/07/25 17:54:58 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-25T17:54:58.063709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.326849552s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-25T17:54:58.063742Z","caller":"traceutil/trace.go:171","msg":"trace[522483704] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; }","duration":"1.326904117s","start":"2024-07-25T17:54:56.736833Z","end":"2024-07-25T17:54:58.063738Z","steps":["trace[522483704] 'agreement among raft nodes before linearized reading'  (duration: 1.326866697s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T17:54:58.063886Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T17:54:56.736823Z","time spent":"1.327053279s","remote":"127.0.0.1:45344","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" limit:500 "}
	2024/07/25 17:54:58 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-25T17:54:58.128671Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T17:54:58.128756Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-25T17:54:58.128885Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ffc3b7517aaad9f6","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-25T17:54:58.129074Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129123Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129157Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129224Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.12933Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129389Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129402Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4df8416cf1504d83"}
	{"level":"info","ts":"2024-07-25T17:54:58.129408Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.129417Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.12947Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.129555Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.129581Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.129633Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.129656Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"28eb4253c22010c1"}
	{"level":"info","ts":"2024-07-25T17:54:58.132187Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-07-25T17:54:58.132354Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-07-25T17:54:58.13239Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-174036","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	
	
	==> kernel <==
	 18:02:35 up 17 min,  0 users,  load average: 0.29, 0.36, 0.29
	Linux ha-174036 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7a99835be7737720b17b3e6782c31ce85b5f6874ca897acfa6ea193b3ae2c944] <==
	I0725 18:01:49.673011       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 18:01:59.673436       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 18:01:59.673464       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 18:01:59.673633       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 18:01:59.673654       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 18:01:59.673702       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 18:01:59.673707       1 main.go:299] handling current node
	I0725 18:02:09.672746       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 18:02:09.672944       1 main.go:299] handling current node
	I0725 18:02:09.672965       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 18:02:09.672971       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 18:02:09.673118       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 18:02:09.673143       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 18:02:19.672876       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 18:02:19.673046       1 main.go:299] handling current node
	I0725 18:02:19.673094       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 18:02:19.673123       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 18:02:19.673514       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 18:02:19.673558       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 18:02:29.672878       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 18:02:29.673059       1 main.go:299] handling current node
	I0725 18:02:29.673094       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 18:02:29.673113       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 18:02:29.673266       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 18:02:29.673287       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [fe8ee70c5b693996ad35a1c33d99144b8d4733584b8f93cf0c07eeb371816bad] <==
	I0725 17:54:30.455506       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:54:30.455592       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:54:30.455611       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:54:40.453888       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:54:40.453977       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:54:40.454318       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:54:40.454342       1 main.go:299] handling current node
	I0725 17:54:40.454354       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:54:40.454360       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	I0725 17:54:40.454413       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:54:40.454418       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	E0725 17:54:43.135441       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1854&timeout=6m8s&timeoutSeconds=368&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0725 17:54:50.456258       1 main.go:295] Handling node with IPs: map[192.168.39.253:{}]
	I0725 17:54:50.456350       1 main.go:322] Node ha-174036-m03 has CIDR [10.244.2.0/24] 
	I0725 17:54:50.456518       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0725 17:54:50.456541       1 main.go:322] Node ha-174036-m04 has CIDR [10.244.3.0/24] 
	I0725 17:54:50.456721       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0725 17:54:50.456745       1 main.go:299] handling current node
	I0725 17:54:50.456761       1 main.go:295] Handling node with IPs: map[192.168.39.197:{}]
	I0725 17:54:50.456810       1 main.go:322] Node ha-174036-m02 has CIDR [10.244.1.0/24] 
	W0725 17:54:55.871417       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	I0725 17:54:55.875166       1 trace.go:236] Trace[1073771757]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232 (25-Jul-2024 17:54:44.077) (total time: 11794ms):
	Trace[1073771757]: ---"Objects listed" error:Unauthorized 11794ms (17:54:55.871)
	Trace[1073771757]: [11.794158331s] [11.794158331s] END
	E0725 17:54:55.875231       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	
	
	==> kube-apiserver [01492692f4a55c16ac602bfa2f3f14b9ac44b0e6f7e146a055b5098d353fc765] <==
	I0725 17:57:24.518240       1 naming_controller.go:291] Starting NamingConditionController
	I0725 17:57:24.518271       1 establishing_controller.go:76] Starting EstablishingController
	I0725 17:57:24.518296       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0725 17:57:24.582329       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0725 17:57:24.592707       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0725 17:57:24.592819       1 policy_source.go:224] refreshing policies
	I0725 17:57:24.602995       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 17:57:24.609893       1 shared_informer.go:320] Caches are synced for configmaps
	I0725 17:57:24.609951       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 17:57:24.611558       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0725 17:57:24.611675       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0725 17:57:24.611708       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0725 17:57:24.614877       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 17:57:24.618139       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0725 17:57:24.618936       1 aggregator.go:165] initial CRD sync complete...
	I0725 17:57:24.619033       1 autoregister_controller.go:141] Starting autoregister controller
	I0725 17:57:24.619060       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0725 17:57:24.619141       1 cache.go:39] Caches are synced for autoregister controller
	I0725 17:57:24.631762       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0725 17:57:24.648507       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.197 192.168.39.253]
	I0725 17:57:24.650453       1 controller.go:615] quota admission added evaluator for: endpoints
	I0725 17:57:24.664866       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0725 17:57:24.668527       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0725 17:57:25.514983       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0725 17:57:25.893826       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.165 192.168.39.197 192.168.39.253]
	
	
	==> kube-apiserver [3a314f987a5258293be50a13db0a871cf04b614ced50e042107a35bacb471c4b] <==
	I0725 17:56:39.008019       1 options.go:221] external host was not specified, using 192.168.39.165
	I0725 17:56:39.009048       1 server.go:148] Version: v1.30.3
	I0725 17:56:39.009097       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:56:39.497426       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0725 17:56:39.499862       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0725 17:56:39.504327       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0725 17:56:39.504359       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0725 17:56:39.504561       1 instance.go:299] Using reconciler: lease
	W0725 17:56:59.497250       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0725 17:56:59.497251       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0725 17:56:59.509994       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	W0725 17:56:59.510001       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-controller-manager [693ed1ff9eb4b980ff521ff1ea90e641998feda2f306358c9445e6f40a662013] <==
	E0725 18:00:36.935469       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174036-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174036-m03"
	E0725 18:00:36.935481       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174036-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174036-m03"
	E0725 18:00:36.935488       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174036-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174036-m03"
	E0725 18:00:36.935494       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174036-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174036-m03"
	I0725 18:00:52.158185       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.962568ms"
	I0725 18:00:52.158398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.985µs"
	E0725 18:00:56.936456       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174036-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174036-m03"
	E0725 18:00:56.936500       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174036-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174036-m03"
	E0725 18:00:56.936507       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174036-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174036-m03"
	E0725 18:00:56.936512       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174036-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174036-m03"
	E0725 18:00:56.936517       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174036-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174036-m03"
	I0725 18:00:56.949270       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-174036-m03"
	I0725 18:00:56.979670       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-174036-m03"
	I0725 18:00:56.980196       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-174036-m03"
	I0725 18:00:57.038329       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-174036-m03"
	I0725 18:00:57.038370       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-174036-m03"
	I0725 18:00:57.077040       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-174036-m03"
	I0725 18:00:57.077076       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fcznc"
	I0725 18:00:57.112854       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fcznc"
	I0725 18:00:57.113045       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-174036-m03"
	I0725 18:00:57.151355       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-174036-m03"
	I0725 18:00:57.151398       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-174036-m03"
	I0725 18:00:57.183685       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-174036-m03"
	I0725 18:00:57.183805       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5klkv"
	I0725 18:00:57.217614       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-5klkv"
	
	
	==> kube-controller-manager [e23512c087fe58c00af7b1190ac55bfb5a024fbe96400cccfe2fb3c3072b5ca4] <==
	I0725 17:56:39.856531       1 serving.go:380] Generated self-signed cert in-memory
	I0725 17:56:40.115589       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0725 17:56:40.115678       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:56:40.117847       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0725 17:56:40.117987       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0725 17:56:40.118142       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0725 17:56:40.118230       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0725 17:57:00.518687       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.165:8443/healthz\": dial tcp 192.168.39.165:8443: connect: connection refused"
	
	
	==> kube-proxy [3afce6c1101d6747621b98df87dd0fa2fe24300305c0235373c8ebb29b823136] <==
	E0725 17:53:44.896303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:47.967168       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:47.967234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:47.967319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:47.967376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:47.967330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:47.967469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:54.239265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:54.239402       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:54.239563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:54.239626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:53:54.239831       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:53:54.241385       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:03.456925       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:03.457534       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:06.528848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:06.528924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:06.528830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:06.529112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:18.815440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:18.815495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1770": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:31.104598       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:31.104851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174036&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0725 17:54:31.104931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	E0725 17:54:31.104986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1853": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [c7df72b32c957115d84bdfba24cadbbef9d124e077e0b3235b6292ac160093aa] <==
	I0725 17:56:39.909521       1 server_linux.go:69] "Using iptables proxy"
	E0725 17:56:40.128686       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174036\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0725 17:56:43.199215       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174036\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0725 17:56:46.272103       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174036\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0725 17:56:52.416263       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174036\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0725 17:57:01.631216       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174036\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0725 17:57:19.848817       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	I0725 17:57:19.940945       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:57:19.941033       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:57:19.941058       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:57:19.955288       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:57:19.956117       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:57:19.956202       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:57:19.958401       1 config.go:192] "Starting service config controller"
	I0725 17:57:19.958455       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:57:19.958506       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:57:19.958527       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:57:19.959300       1 config.go:319] "Starting node config controller"
	I0725 17:57:19.959338       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:57:20.058739       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 17:57:20.058955       1 shared_informer.go:320] Caches are synced for service config
	I0725 17:57:20.060724       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7bbd7762992c8f6c65c1db489cdbc5e30de5e522cb55ef194c2957dc6c00506a] <==
	W0725 17:57:16.896690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.165:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:16.896890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.165:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:17.503933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.165:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:17.504001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.165:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:19.231282       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.165:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:19.231373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.165:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:19.649359       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.165:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:19.649429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.165:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:20.058376       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.165:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:20.058433       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.165:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:20.786737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.165:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:20.786932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.165:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:21.696353       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.165:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:21.696471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.165:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	W0725 17:57:22.352374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.165:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0725 17:57:22.352439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.165:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	I0725 17:57:36.627164       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0725 17:59:57.593228       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-gzqsr\": pod busybox-fc5497c4f-gzqsr is already assigned to node \"ha-174036-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-gzqsr" node="ha-174036-m04"
	E0725 17:59:57.593386       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod cf2f864c-a1d1-4e34-8b17-90766136763d(default/busybox-fc5497c4f-gzqsr) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-gzqsr"
	E0725 17:59:57.593430       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-gzqsr\": pod busybox-fc5497c4f-gzqsr is already assigned to node \"ha-174036-m04\"" pod="default/busybox-fc5497c4f-gzqsr"
	I0725 17:59:57.593457       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-gzqsr" node="ha-174036-m04"
	E0725 17:59:58.851713       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-flc88\": pod busybox-fc5497c4f-flc88 is already assigned to node \"ha-174036-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-flc88" node="ha-174036-m04"
	E0725 17:59:58.851897       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod fde27264-5c35-4626-8188-f1470e2c3f05(default/busybox-fc5497c4f-flc88) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-flc88"
	E0725 17:59:58.851939       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-flc88\": pod busybox-fc5497c4f-flc88 is already assigned to node \"ha-174036-m04\"" pod="default/busybox-fc5497c4f-flc88"
	I0725 17:59:58.851983       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-flc88" node="ha-174036-m04"
	
	
	==> kube-scheduler [fe2d3acd60c408eb2379c9a02f34202558a6862a043a1058b24ca1c683e39002] <==
	E0725 17:54:50.183128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 17:54:50.462190       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 17:54:50.462279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 17:54:50.733222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 17:54:50.733363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0725 17:54:50.761711       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 17:54:50.761888       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 17:54:50.935891       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 17:54:50.936043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 17:54:51.973387       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 17:54:51.973492       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 17:54:52.038407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 17:54:52.038533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 17:54:52.289344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 17:54:52.289451       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 17:54:52.688718       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 17:54:52.688958       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 17:54:52.890362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 17:54:52.890491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 17:54:57.987090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 17:54:57.987123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0725 17:54:58.019657       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0725 17:54:58.019972       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0725 17:54:58.020142       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0725 17:54:58.020379       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 25 17:57:45 ha-174036 kubelet[1362]: E0725 17:57:45.807568    1362 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c9354422-69ff-4676-80d1-4940badf9b4e)\"" pod="kube-system/storage-provisioner" podUID="c9354422-69ff-4676-80d1-4940badf9b4e"
	Jul 25 17:57:58 ha-174036 kubelet[1362]: I0725 17:57:58.807712    1362 scope.go:117] "RemoveContainer" containerID="274822bb9a65e24f7616f5b4de9b9fa662151e985e3d0f78910e3fef62d39963"
	Jul 25 17:58:10 ha-174036 kubelet[1362]: I0725 17:58:10.808166    1362 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-174036" podUID="2ce4bfe5-5441-4a28-889e-7743367f32b2"
	Jul 25 17:58:10 ha-174036 kubelet[1362]: I0725 17:58:10.837984    1362 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-174036"
	Jul 25 17:58:20 ha-174036 kubelet[1362]: I0725 17:58:20.826841    1362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-174036" podStartSLOduration=10.826748309 podStartE2EDuration="10.826748309s" podCreationTimestamp="2024-07-25 17:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-25 17:58:20.826291809 +0000 UTC m=+760.178563911" watchObservedRunningTime="2024-07-25 17:58:20.826748309 +0000 UTC m=+760.179020410"
	Jul 25 17:58:40 ha-174036 kubelet[1362]: E0725 17:58:40.851874    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:58:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:58:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:58:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:58:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 17:59:40 ha-174036 kubelet[1362]: E0725 17:59:40.856128    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 17:59:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 17:59:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 17:59:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 17:59:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 18:00:40 ha-174036 kubelet[1362]: E0725 18:00:40.850629    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 18:00:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 18:00:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 18:00:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 18:00:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 18:01:40 ha-174036 kubelet[1362]: E0725 18:01:40.853306    1362 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 18:01:40 ha-174036 kubelet[1362]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 18:01:40 ha-174036 kubelet[1362]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 18:01:40 ha-174036 kubelet[1362]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 18:01:40 ha-174036 kubelet[1362]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:02:34.075854   32524 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19326-5877/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174036 -n ha-174036
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.70s)
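
Note on the "token too long" message above: in the stderr block, logs.go:258 reports that reading .minikube/logs/lastStart.txt failed with "bufio.Scanner: token too long". That is the standard error Go's bufio.Scanner returns when a single line exceeds its default 64 KiB token limit (the start log contains very long lines). The sketch below is not minikube's code; it is a minimal, hypothetical illustration of the usual workaround of enlarging the scanner buffer before reading, with the file name reused from the log only as an example.

	// Minimal sketch (assumption: not minikube's implementation) of reading a log
	// file whose lines may exceed bufio.Scanner's default 64 KiB token limit.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path only
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line limit from the 64 KiB default to 10 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Without the larger buffer this is where "token too long" surfaces.
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}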

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (324.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-253131
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-253131
E0725 18:19:12.058286   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-253131: exit status 82 (2m1.763148297s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-253131-m03"  ...
	* Stopping node "multinode-253131-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-253131" : exit status 82
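
In this run, exit status 82 accompanies the GUEST_STOP_TIMEOUT reason shown in the stderr block above: the stop command retried for roughly two minutes and gave up while the VM still reported state "Running". The sketch below is only a hypothetical illustration of that poll-until-deadline pattern, not minikube's stop logic; powerOff and getState are stand-in functions.

	// Hypothetical sketch of a stop-with-deadline loop: ask the guest to power
	// off, poll its state, and fail once the deadline passes while it is still
	// "Running" (analogous to the GUEST_STOP_TIMEOUT above).
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func stopWithDeadline(powerOff func() error, getState func() string, deadline time.Duration) error {
		timeout := time.After(deadline)
		tick := time.NewTicker(5 * time.Second)
		defer tick.Stop()
		for {
			if err := powerOff(); err != nil {
				return err
			}
			select {
			case <-timeout:
				if getState() == "Running" {
					return errors.New(`unable to stop vm, current state "Running"`)
				}
				return nil
			case <-tick.C:
				if getState() != "Running" {
					return nil // stopped before the deadline
				}
			}
		}
	}

	func main() {
		// Toy usage: a guest that never leaves "Running" trips the timeout,
		// mirroring the two-minute wait in the failed stop above.
		err := stopWithDeadline(
			func() error { return nil },        // powerOff stand-in
			func() string { return "Running" }, // getState stand-in
			2*time.Second, // shortened; the real test waited ~2 minutes
		)
		fmt.Println("stop result:", err)
	}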
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-253131 --wait=true -v=8 --alsologtostderr
E0725 18:21:58.589876   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 18:22:15.101164   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-253131 --wait=true -v=8 --alsologtostderr: (3m20.173585685s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-253131
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-253131 -n multinode-253131
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-253131 logs -n 25: (1.387836239s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m02:/home/docker/cp-test.txt                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1140125035/001/cp-test_multinode-253131-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m02:/home/docker/cp-test.txt                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131:/home/docker/cp-test_multinode-253131-m02_multinode-253131.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n multinode-253131 sudo cat                                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /home/docker/cp-test_multinode-253131-m02_multinode-253131.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m02:/home/docker/cp-test.txt                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03:/home/docker/cp-test_multinode-253131-m02_multinode-253131-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n multinode-253131-m03 sudo cat                                   | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /home/docker/cp-test_multinode-253131-m02_multinode-253131-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp testdata/cp-test.txt                                                | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m03:/home/docker/cp-test.txt                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1140125035/001/cp-test_multinode-253131-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m03:/home/docker/cp-test.txt                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131:/home/docker/cp-test_multinode-253131-m03_multinode-253131.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n multinode-253131 sudo cat                                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /home/docker/cp-test_multinode-253131-m03_multinode-253131.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m03:/home/docker/cp-test.txt                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m02:/home/docker/cp-test_multinode-253131-m03_multinode-253131-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n multinode-253131-m02 sudo cat                                   | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /home/docker/cp-test_multinode-253131-m03_multinode-253131-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-253131 node stop m03                                                          | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	| node    | multinode-253131 node start                                                             | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:17 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-253131                                                                | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:17 UTC |                     |
	| stop    | -p multinode-253131                                                                     | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:17 UTC |                     |
	| start   | -p multinode-253131                                                                     | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:19 UTC | 25 Jul 24 18:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-253131                                                                | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:19:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:19:15.086550   42258 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:19:15.086824   42258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:19:15.086834   42258 out.go:304] Setting ErrFile to fd 2...
	I0725 18:19:15.086839   42258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:19:15.086983   42258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:19:15.087462   42258 out.go:298] Setting JSON to false
	I0725 18:19:15.088390   42258 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3699,"bootTime":1721927856,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:19:15.088444   42258 start.go:139] virtualization: kvm guest
	I0725 18:19:15.090373   42258 out.go:177] * [multinode-253131] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:19:15.091748   42258 notify.go:220] Checking for updates...
	I0725 18:19:15.091752   42258 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:19:15.092976   42258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:19:15.094159   42258 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:19:15.095218   42258 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:19:15.096280   42258 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:19:15.097455   42258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:19:15.098900   42258 config.go:182] Loaded profile config "multinode-253131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:19:15.099003   42258 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:19:15.099436   42258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:19:15.099485   42258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:19:15.114394   42258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45001
	I0725 18:19:15.114849   42258 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:19:15.115509   42258 main.go:141] libmachine: Using API Version  1
	I0725 18:19:15.115546   42258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:19:15.115874   42258 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:19:15.116075   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:19:15.152069   42258 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:19:15.153232   42258 start.go:297] selected driver: kvm2
	I0725 18:19:15.153246   42258 start.go:901] validating driver "kvm2" against &{Name:multinode-253131 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-253131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:19:15.153392   42258 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:19:15.153724   42258 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:19:15.153793   42258 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:19:15.168318   42258 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:19:15.168955   42258 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:19:15.169012   42258 cni.go:84] Creating CNI manager for ""
	I0725 18:19:15.169023   42258 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0725 18:19:15.169124   42258 start.go:340] cluster config:
	{Name:multinode-253131 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-253131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:19:15.169319   42258 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:19:15.170774   42258 out.go:177] * Starting "multinode-253131" primary control-plane node in "multinode-253131" cluster
	I0725 18:19:15.171798   42258 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:19:15.171828   42258 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 18:19:15.171835   42258 cache.go:56] Caching tarball of preloaded images
	I0725 18:19:15.171921   42258 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:19:15.171932   42258 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 18:19:15.172039   42258 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/config.json ...
	I0725 18:19:15.172219   42258 start.go:360] acquireMachinesLock for multinode-253131: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:19:15.172262   42258 start.go:364] duration metric: took 26.613µs to acquireMachinesLock for "multinode-253131"
	I0725 18:19:15.172275   42258 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:19:15.172281   42258 fix.go:54] fixHost starting: 
	I0725 18:19:15.172642   42258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:19:15.172674   42258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:19:15.187004   42258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41817
	I0725 18:19:15.187488   42258 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:19:15.188050   42258 main.go:141] libmachine: Using API Version  1
	I0725 18:19:15.188073   42258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:19:15.188455   42258 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:19:15.188666   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:19:15.188864   42258 main.go:141] libmachine: (multinode-253131) Calling .GetState
	I0725 18:19:15.190705   42258 fix.go:112] recreateIfNeeded on multinode-253131: state=Running err=<nil>
	W0725 18:19:15.190743   42258 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:19:15.192604   42258 out.go:177] * Updating the running kvm2 "multinode-253131" VM ...
	I0725 18:19:15.193868   42258 machine.go:94] provisionDockerMachine start ...
	I0725 18:19:15.193896   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:19:15.194104   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.196825   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.197356   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.197382   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.197451   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:19:15.197618   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.197790   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.197936   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:19:15.198135   42258 main.go:141] libmachine: Using SSH client type: native
	I0725 18:19:15.198364   42258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0725 18:19:15.198379   42258 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:19:15.313406   42258 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-253131
	
	I0725 18:19:15.313438   42258 main.go:141] libmachine: (multinode-253131) Calling .GetMachineName
	I0725 18:19:15.313722   42258 buildroot.go:166] provisioning hostname "multinode-253131"
	I0725 18:19:15.313764   42258 main.go:141] libmachine: (multinode-253131) Calling .GetMachineName
	I0725 18:19:15.313990   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.316786   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.317196   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.317220   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.317366   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:19:15.317554   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.317722   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.317882   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:19:15.318040   42258 main.go:141] libmachine: Using SSH client type: native
	I0725 18:19:15.318225   42258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0725 18:19:15.318242   42258 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-253131 && echo "multinode-253131" | sudo tee /etc/hostname
	I0725 18:19:15.438825   42258 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-253131
	
	I0725 18:19:15.438858   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.441856   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.442269   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.442298   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.442484   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:19:15.442693   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.442862   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.443010   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:19:15.443161   42258 main.go:141] libmachine: Using SSH client type: native
	I0725 18:19:15.443320   42258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0725 18:19:15.443336   42258 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-253131' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-253131/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-253131' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:19:15.553442   42258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
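The hostname provisioning above is idempotent: set the kernel hostname, persist it, and make sure /etc/hosts resolves it. Stripped of the ssh_runner wrapping, it is roughly:

    sudo hostname multinode-253131 && echo "multinode-253131" | sudo tee /etc/hostname
    # if /etc/hosts has no entry for the hostname, rewrite an existing 127.0.1.1 line
    # or append: 127.0.1.1 multinode-253131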
	I0725 18:19:15.553472   42258 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:19:15.553496   42258 buildroot.go:174] setting up certificates
	I0725 18:19:15.553504   42258 provision.go:84] configureAuth start
	I0725 18:19:15.553512   42258 main.go:141] libmachine: (multinode-253131) Calling .GetMachineName
	I0725 18:19:15.553819   42258 main.go:141] libmachine: (multinode-253131) Calling .GetIP
	I0725 18:19:15.556453   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.556907   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.556949   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.557104   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.559407   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.559746   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.559778   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.559953   42258 provision.go:143] copyHostCerts
	I0725 18:19:15.559979   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:19:15.560010   42258 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:19:15.560021   42258 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:19:15.560102   42258 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:19:15.560212   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:19:15.560235   42258 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:19:15.560244   42258 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:19:15.560284   42258 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:19:15.560364   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:19:15.560389   42258 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:19:15.560398   42258 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:19:15.560430   42258 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:19:15.560497   42258 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.multinode-253131 san=[127.0.0.1 192.168.39.54 localhost minikube multinode-253131]
	I0725 18:19:15.819885   42258 provision.go:177] copyRemoteCerts
	I0725 18:19:15.819947   42258 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:19:15.819969   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.822753   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.823062   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.823084   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.823246   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:19:15.823444   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.823622   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:19:15.823836   42258 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/multinode-253131/id_rsa Username:docker}
	I0725 18:19:15.907092   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 18:19:15.907176   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:19:15.930803   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 18:19:15.930862   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0725 18:19:15.955192   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 18:19:15.955252   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:19:15.978798   42258 provision.go:87] duration metric: took 425.282174ms to configureAuth
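Condensed from the configureAuth/copyRemoteCerts calls above: minikube reuses the cached CA under ~/.minikube/certs, signs a fresh server certificate with SANs [127.0.0.1 192.168.39.54 localhost minikube multinode-253131], and pushes three files onto the VM (the transfer goes through ssh_runner's scp; shown here only as the resulting layout):

    sudo mkdir -p /etc/docker
    # certs/ca.pem              -> /etc/docker/ca.pem          (1078 bytes)
    # machines/server.pem       -> /etc/docker/server.pem      (1216 bytes)
    # machines/server-key.pem   -> /etc/docker/server-key.pem  (1675 bytes)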
	I0725 18:19:15.978831   42258 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:19:15.979099   42258 config.go:182] Loaded profile config "multinode-253131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:19:15.979161   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.981798   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.982245   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.982272   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.982446   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:19:15.982668   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.982831   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.982973   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:19:15.983163   42258 main.go:141] libmachine: Using SSH client type: native
	I0725 18:19:15.983364   42258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0725 18:19:15.983384   42258 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:20:46.827634   42258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:20:46.827663   42258 machine.go:97] duration metric: took 1m31.633778148s to provisionDockerMachine
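Nearly all of that 1m31.6s is the single step above: writing the CRI-O drop-in and restarting the service over SSH (18:19:15.98 -> 18:20:46.83). The %!s(MISSING) in the logged command is a logging artifact for a literal %s; what actually runs is approximately:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio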
	I0725 18:20:46.827679   42258 start.go:293] postStartSetup for "multinode-253131" (driver="kvm2")
	I0725 18:20:46.827690   42258 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:20:46.827705   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:20:46.827984   42258 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:20:46.828007   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:20:46.831114   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:46.831514   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:46.831533   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:46.831688   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:20:46.831908   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:20:46.832090   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:20:46.832261   42258 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/multinode-253131/id_rsa Username:docker}
	I0725 18:20:46.919844   42258 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:20:46.923850   42258 command_runner.go:130] > NAME=Buildroot
	I0725 18:20:46.923874   42258 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0725 18:20:46.923881   42258 command_runner.go:130] > ID=buildroot
	I0725 18:20:46.923888   42258 command_runner.go:130] > VERSION_ID=2023.02.9
	I0725 18:20:46.923902   42258 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0725 18:20:46.923944   42258 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:20:46.923959   42258 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:20:46.924021   42258 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:20:46.924140   42258 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:20:46.924155   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 18:20:46.924252   42258 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:20:46.933391   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:20:46.955632   42258 start.go:296] duration metric: took 127.938817ms for postStartSetup
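postStartSetup creates the standard minikube directory tree on the VM and then syncs local assets from ~/.minikube/files; in this run that is a single certificate bundle:

    sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube \
        /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images \
        /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
    # files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem (1708 bytes)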
	I0725 18:20:46.955676   42258 fix.go:56] duration metric: took 1m31.783393669s for fixHost
	I0725 18:20:46.955709   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:20:46.958350   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:46.958763   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:46.958787   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:46.958994   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:20:46.959235   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:20:46.959468   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:20:46.959636   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:20:46.959827   42258 main.go:141] libmachine: Using SSH client type: native
	I0725 18:20:46.960003   42258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0725 18:20:46.960025   42258 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:20:47.068825   42258 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721931647.042399081
	
	I0725 18:20:47.068855   42258 fix.go:216] guest clock: 1721931647.042399081
	I0725 18:20:47.068885   42258 fix.go:229] Guest: 2024-07-25 18:20:47.042399081 +0000 UTC Remote: 2024-07-25 18:20:46.955680646 +0000 UTC m=+91.903510165 (delta=86.718435ms)
	I0725 18:20:47.068949   42258 fix.go:200] guest clock delta is within tolerance: 86.718435ms
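The guest clock check runs `date +%s.%N` on the VM (again rendered as %!s(MISSING).%!N(MISSING) by the logger) and compares it with the host-side timestamp taken when fixHost finished:

    guest 18:20:47.042399081 - remote 18:20:46.955680646 = 0.086718435s

86.7ms is inside minikube's skew tolerance, so the guest clock is left alone.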
	I0725 18:20:47.068961   42258 start.go:83] releasing machines lock for "multinode-253131", held for 1m31.89668886s
	I0725 18:20:47.068991   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:20:47.069258   42258 main.go:141] libmachine: (multinode-253131) Calling .GetIP
	I0725 18:20:47.072177   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.072797   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:47.072829   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.073080   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:20:47.073609   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:20:47.073798   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:20:47.073871   42258 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:20:47.073912   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:20:47.074005   42258 ssh_runner.go:195] Run: cat /version.json
	I0725 18:20:47.074019   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:20:47.076802   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.076863   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.077186   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:47.077210   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.077236   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:47.077252   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.077348   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:20:47.077539   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:20:47.077668   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:20:47.077833   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:20:47.077845   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:20:47.077974   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:20:47.078031   42258 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/multinode-253131/id_rsa Username:docker}
	I0725 18:20:47.078239   42258 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/multinode-253131/id_rsa Username:docker}
	I0725 18:20:47.189230   42258 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0725 18:20:47.189279   42258 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0725 18:20:47.189389   42258 ssh_runner.go:195] Run: systemctl --version
	I0725 18:20:47.195023   42258 command_runner.go:130] > systemd 252 (252)
	I0725 18:20:47.195054   42258 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0725 18:20:47.195243   42258 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:20:47.354449   42258 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 18:20:47.362316   42258 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0725 18:20:47.362447   42258 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:20:47.362509   42258 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:20:47.371751   42258 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0725 18:20:47.371773   42258 start.go:495] detecting cgroup driver to use...
	I0725 18:20:47.371838   42258 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:20:47.387333   42258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:20:47.401637   42258 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:20:47.401702   42258 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:20:47.415261   42258 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:20:47.428015   42258 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:20:47.564805   42258 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:20:47.698067   42258 docker.go:233] disabling docker service ...
	I0725 18:20:47.698143   42258 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:20:47.713851   42258 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:20:47.726757   42258 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:20:47.861369   42258 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:20:47.993722   42258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
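Before configuring CRI-O, minikube makes sure no other runtime owns the CRI socket: containerd is stopped, then cri-docker and docker are stopped, disabled, and masked. Condensed from the ssh_runner calls above:

    sudo systemctl stop -f containerd
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket && sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket && sudo systemctl mask docker.service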
	I0725 18:20:48.006720   42258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:20:48.025342   42258 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0725 18:20:48.025998   42258 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:20:48.026066   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.036062   42258 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:20:48.036124   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.046591   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.056392   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.066083   42258 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:20:48.075927   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.086906   42258 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.097952   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.107491   42258 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:20:48.116015   42258 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0725 18:20:48.116086   42258 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:20:48.124719   42258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:20:48.258031   42258 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:20:49.296213   42258 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.038143083s)
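The runtime configuration pass above is a crictl endpoint file plus a series of in-place edits to /etc/crio/crio.conf.d/02-crio.conf, followed by a daemon-reload and restart (1.04s here). Stripped of the ssh_runner wrapping it amounts to:

    echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # re-add conmon_cgroup = "pod" after cgroup_manager, and inject
    # default_sysctls = ["net.ipv4.ip_unprivileged_port_start=0"]
    sudo rm -rf /etc/cni/net.mk
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio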
	I0725 18:20:49.296245   42258 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:20:49.296341   42258 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:20:49.301442   42258 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0725 18:20:49.301461   42258 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0725 18:20:49.301467   42258 command_runner.go:130] > Device: 0,22	Inode: 1332        Links: 1
	I0725 18:20:49.301475   42258 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0725 18:20:49.301483   42258 command_runner.go:130] > Access: 2024-07-25 18:20:49.162836071 +0000
	I0725 18:20:49.301492   42258 command_runner.go:130] > Modify: 2024-07-25 18:20:49.162836071 +0000
	I0725 18:20:49.301500   42258 command_runner.go:130] > Change: 2024-07-25 18:20:49.162836071 +0000
	I0725 18:20:49.301505   42258 command_runner.go:130] >  Birth: -
	I0725 18:20:49.301523   42258 start.go:563] Will wait 60s for crictl version
	I0725 18:20:49.301573   42258 ssh_runner.go:195] Run: which crictl
	I0725 18:20:49.305125   42258 command_runner.go:130] > /usr/bin/crictl
	I0725 18:20:49.305276   42258 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:20:49.343228   42258 command_runner.go:130] > Version:  0.1.0
	I0725 18:20:49.343249   42258 command_runner.go:130] > RuntimeName:  cri-o
	I0725 18:20:49.343256   42258 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0725 18:20:49.343263   42258 command_runner.go:130] > RuntimeApiVersion:  v1
	I0725 18:20:49.343363   42258 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:20:49.343465   42258 ssh_runner.go:195] Run: crio --version
	I0725 18:20:49.369228   42258 command_runner.go:130] > crio version 1.29.1
	I0725 18:20:49.369254   42258 command_runner.go:130] > Version:        1.29.1
	I0725 18:20:49.369269   42258 command_runner.go:130] > GitCommit:      unknown
	I0725 18:20:49.369276   42258 command_runner.go:130] > GitCommitDate:  unknown
	I0725 18:20:49.369283   42258 command_runner.go:130] > GitTreeState:   clean
	I0725 18:20:49.369291   42258 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0725 18:20:49.369298   42258 command_runner.go:130] > GoVersion:      go1.21.6
	I0725 18:20:49.369306   42258 command_runner.go:130] > Compiler:       gc
	I0725 18:20:49.369312   42258 command_runner.go:130] > Platform:       linux/amd64
	I0725 18:20:49.369320   42258 command_runner.go:130] > Linkmode:       dynamic
	I0725 18:20:49.369328   42258 command_runner.go:130] > BuildTags:      
	I0725 18:20:49.369335   42258 command_runner.go:130] >   containers_image_ostree_stub
	I0725 18:20:49.369345   42258 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0725 18:20:49.369351   42258 command_runner.go:130] >   btrfs_noversion
	I0725 18:20:49.369359   42258 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0725 18:20:49.369369   42258 command_runner.go:130] >   libdm_no_deferred_remove
	I0725 18:20:49.369378   42258 command_runner.go:130] >   seccomp
	I0725 18:20:49.369385   42258 command_runner.go:130] > LDFlags:          unknown
	I0725 18:20:49.369395   42258 command_runner.go:130] > SeccompEnabled:   true
	I0725 18:20:49.369402   42258 command_runner.go:130] > AppArmorEnabled:  false
	I0725 18:20:49.370607   42258 ssh_runner.go:195] Run: crio --version
	I0725 18:20:49.396374   42258 command_runner.go:130] > crio version 1.29.1
	I0725 18:20:49.396394   42258 command_runner.go:130] > Version:        1.29.1
	I0725 18:20:49.396401   42258 command_runner.go:130] > GitCommit:      unknown
	I0725 18:20:49.396408   42258 command_runner.go:130] > GitCommitDate:  unknown
	I0725 18:20:49.396414   42258 command_runner.go:130] > GitTreeState:   clean
	I0725 18:20:49.396422   42258 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0725 18:20:49.396428   42258 command_runner.go:130] > GoVersion:      go1.21.6
	I0725 18:20:49.396434   42258 command_runner.go:130] > Compiler:       gc
	I0725 18:20:49.396441   42258 command_runner.go:130] > Platform:       linux/amd64
	I0725 18:20:49.396448   42258 command_runner.go:130] > Linkmode:       dynamic
	I0725 18:20:49.396464   42258 command_runner.go:130] > BuildTags:      
	I0725 18:20:49.396473   42258 command_runner.go:130] >   containers_image_ostree_stub
	I0725 18:20:49.396480   42258 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0725 18:20:49.396486   42258 command_runner.go:130] >   btrfs_noversion
	I0725 18:20:49.396491   42258 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0725 18:20:49.396498   42258 command_runner.go:130] >   libdm_no_deferred_remove
	I0725 18:20:49.396501   42258 command_runner.go:130] >   seccomp
	I0725 18:20:49.396509   42258 command_runner.go:130] > LDFlags:          unknown
	I0725 18:20:49.396513   42258 command_runner.go:130] > SeccompEnabled:   true
	I0725 18:20:49.396521   42258 command_runner.go:130] > AppArmorEnabled:  false
	I0725 18:20:49.398594   42258 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:20:49.400370   42258 main.go:141] libmachine: (multinode-253131) Calling .GetIP
	I0725 18:20:49.403208   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:49.403615   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:49.403642   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:49.403861   42258 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:20:49.407841   42258 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0725 18:20:49.407941   42258 kubeadm.go:883] updating cluster {Name:multinode-253131 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-253131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:20:49.408087   42258 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:20:49.408139   42258 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:20:49.449920   42258 command_runner.go:130] > {
	I0725 18:20:49.449942   42258 command_runner.go:130] >   "images": [
	I0725 18:20:49.449946   42258 command_runner.go:130] >     {
	I0725 18:20:49.449954   42258 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0725 18:20:49.449960   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.449965   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0725 18:20:49.449971   42258 command_runner.go:130] >       ],
	I0725 18:20:49.449977   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450004   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0725 18:20:49.450019   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0725 18:20:49.450024   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450033   42258 command_runner.go:130] >       "size": "87165492",
	I0725 18:20:49.450040   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450046   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450056   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450065   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450072   42258 command_runner.go:130] >     },
	I0725 18:20:49.450080   42258 command_runner.go:130] >     {
	I0725 18:20:49.450090   42258 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0725 18:20:49.450100   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450109   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0725 18:20:49.450116   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450124   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450137   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0725 18:20:49.450146   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0725 18:20:49.450150   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450154   42258 command_runner.go:130] >       "size": "87174707",
	I0725 18:20:49.450161   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450173   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450179   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450183   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450189   42258 command_runner.go:130] >     },
	I0725 18:20:49.450192   42258 command_runner.go:130] >     {
	I0725 18:20:49.450200   42258 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0725 18:20:49.450204   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450209   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0725 18:20:49.450217   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450221   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450227   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0725 18:20:49.450236   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0725 18:20:49.450239   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450244   42258 command_runner.go:130] >       "size": "1363676",
	I0725 18:20:49.450250   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450255   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450261   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450265   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450272   42258 command_runner.go:130] >     },
	I0725 18:20:49.450277   42258 command_runner.go:130] >     {
	I0725 18:20:49.450288   42258 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0725 18:20:49.450294   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450299   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0725 18:20:49.450305   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450308   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450316   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0725 18:20:49.450326   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0725 18:20:49.450331   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450334   42258 command_runner.go:130] >       "size": "31470524",
	I0725 18:20:49.450338   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450342   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450346   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450351   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450354   42258 command_runner.go:130] >     },
	I0725 18:20:49.450360   42258 command_runner.go:130] >     {
	I0725 18:20:49.450366   42258 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0725 18:20:49.450373   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450378   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0725 18:20:49.450384   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450387   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450397   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0725 18:20:49.450403   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0725 18:20:49.450409   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450413   42258 command_runner.go:130] >       "size": "61245718",
	I0725 18:20:49.450417   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450421   42258 command_runner.go:130] >       "username": "nonroot",
	I0725 18:20:49.450428   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450432   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450438   42258 command_runner.go:130] >     },
	I0725 18:20:49.450442   42258 command_runner.go:130] >     {
	I0725 18:20:49.450448   42258 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0725 18:20:49.450452   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450457   42258 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0725 18:20:49.450463   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450467   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450476   42258 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0725 18:20:49.450482   42258 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0725 18:20:49.450486   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450491   42258 command_runner.go:130] >       "size": "150779692",
	I0725 18:20:49.450496   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.450500   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.450503   42258 command_runner.go:130] >       },
	I0725 18:20:49.450507   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450513   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450521   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450526   42258 command_runner.go:130] >     },
	I0725 18:20:49.450530   42258 command_runner.go:130] >     {
	I0725 18:20:49.450535   42258 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0725 18:20:49.450539   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450545   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0725 18:20:49.450550   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450554   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450564   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0725 18:20:49.450572   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0725 18:20:49.450576   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450580   42258 command_runner.go:130] >       "size": "117609954",
	I0725 18:20:49.450597   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.450603   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.450606   42258 command_runner.go:130] >       },
	I0725 18:20:49.450610   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450614   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450618   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450622   42258 command_runner.go:130] >     },
	I0725 18:20:49.450626   42258 command_runner.go:130] >     {
	I0725 18:20:49.450631   42258 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0725 18:20:49.450638   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450643   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0725 18:20:49.450648   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450652   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450666   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0725 18:20:49.450675   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0725 18:20:49.450679   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450686   42258 command_runner.go:130] >       "size": "112198984",
	I0725 18:20:49.450690   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.450693   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.450696   42258 command_runner.go:130] >       },
	I0725 18:20:49.450700   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450704   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450708   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450712   42258 command_runner.go:130] >     },
	I0725 18:20:49.450715   42258 command_runner.go:130] >     {
	I0725 18:20:49.450720   42258 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0725 18:20:49.450724   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450729   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0725 18:20:49.450733   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450736   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450743   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0725 18:20:49.450749   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0725 18:20:49.450754   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450758   42258 command_runner.go:130] >       "size": "85953945",
	I0725 18:20:49.450762   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450765   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450769   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450772   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450775   42258 command_runner.go:130] >     },
	I0725 18:20:49.450778   42258 command_runner.go:130] >     {
	I0725 18:20:49.450784   42258 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0725 18:20:49.450788   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450792   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0725 18:20:49.450795   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450798   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450805   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0725 18:20:49.450812   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0725 18:20:49.450815   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450819   42258 command_runner.go:130] >       "size": "63051080",
	I0725 18:20:49.450822   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.450825   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.450828   42258 command_runner.go:130] >       },
	I0725 18:20:49.450831   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450836   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450839   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450843   42258 command_runner.go:130] >     },
	I0725 18:20:49.450846   42258 command_runner.go:130] >     {
	I0725 18:20:49.450852   42258 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0725 18:20:49.450858   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450862   42258 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0725 18:20:49.450865   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450869   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450876   42258 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0725 18:20:49.450885   42258 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0725 18:20:49.450888   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450892   42258 command_runner.go:130] >       "size": "750414",
	I0725 18:20:49.450898   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.450902   42258 command_runner.go:130] >         "value": "65535"
	I0725 18:20:49.450909   42258 command_runner.go:130] >       },
	I0725 18:20:49.450912   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450918   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450922   42258 command_runner.go:130] >       "pinned": true
	I0725 18:20:49.450927   42258 command_runner.go:130] >     }
	I0725 18:20:49.450930   42258 command_runner.go:130] >   ]
	I0725 18:20:49.450933   42258 command_runner.go:130] > }
	I0725 18:20:49.451115   42258 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:20:49.451129   42258 crio.go:433] Images already preloaded, skipping extraction
	I0725 18:20:49.451196   42258 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:20:49.481528   42258 command_runner.go:130] > {
	I0725 18:20:49.481546   42258 command_runner.go:130] >   "images": [
	I0725 18:20:49.481553   42258 command_runner.go:130] >     {
	I0725 18:20:49.481563   42258 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0725 18:20:49.481569   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481574   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0725 18:20:49.481578   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481582   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481590   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0725 18:20:49.481597   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0725 18:20:49.481602   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481607   42258 command_runner.go:130] >       "size": "87165492",
	I0725 18:20:49.481615   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.481620   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.481627   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.481630   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.481634   42258 command_runner.go:130] >     },
	I0725 18:20:49.481638   42258 command_runner.go:130] >     {
	I0725 18:20:49.481644   42258 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0725 18:20:49.481650   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481655   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0725 18:20:49.481659   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481663   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481671   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0725 18:20:49.481677   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0725 18:20:49.481683   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481687   42258 command_runner.go:130] >       "size": "87174707",
	I0725 18:20:49.481691   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.481697   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.481703   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.481707   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.481711   42258 command_runner.go:130] >     },
	I0725 18:20:49.481715   42258 command_runner.go:130] >     {
	I0725 18:20:49.481721   42258 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0725 18:20:49.481725   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481730   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0725 18:20:49.481734   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481738   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481747   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0725 18:20:49.481754   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0725 18:20:49.481758   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481762   42258 command_runner.go:130] >       "size": "1363676",
	I0725 18:20:49.481766   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.481774   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.481778   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.481781   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.481785   42258 command_runner.go:130] >     },
	I0725 18:20:49.481789   42258 command_runner.go:130] >     {
	I0725 18:20:49.481796   42258 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0725 18:20:49.481800   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481807   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0725 18:20:49.481811   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481815   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481822   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0725 18:20:49.481834   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0725 18:20:49.481839   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481844   42258 command_runner.go:130] >       "size": "31470524",
	I0725 18:20:49.481850   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.481854   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.481860   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.481864   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.481868   42258 command_runner.go:130] >     },
	I0725 18:20:49.481871   42258 command_runner.go:130] >     {
	I0725 18:20:49.481877   42258 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0725 18:20:49.481884   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481889   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0725 18:20:49.481893   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481897   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481904   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0725 18:20:49.481913   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0725 18:20:49.481917   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481923   42258 command_runner.go:130] >       "size": "61245718",
	I0725 18:20:49.481926   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.481931   42258 command_runner.go:130] >       "username": "nonroot",
	I0725 18:20:49.481935   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.481941   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.481944   42258 command_runner.go:130] >     },
	I0725 18:20:49.481948   42258 command_runner.go:130] >     {
	I0725 18:20:49.481953   42258 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0725 18:20:49.481959   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481966   42258 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0725 18:20:49.481974   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481979   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481989   42258 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0725 18:20:49.482002   42258 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0725 18:20:49.482010   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482016   42258 command_runner.go:130] >       "size": "150779692",
	I0725 18:20:49.482022   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.482026   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.482031   42258 command_runner.go:130] >       },
	I0725 18:20:49.482035   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482040   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482045   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.482052   42258 command_runner.go:130] >     },
	I0725 18:20:49.482057   42258 command_runner.go:130] >     {
	I0725 18:20:49.482070   42258 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0725 18:20:49.482079   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.482086   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0725 18:20:49.482094   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482101   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.482115   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0725 18:20:49.482129   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0725 18:20:49.482137   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482141   42258 command_runner.go:130] >       "size": "117609954",
	I0725 18:20:49.482148   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.482152   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.482158   42258 command_runner.go:130] >       },
	I0725 18:20:49.482162   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482165   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482171   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.482176   42258 command_runner.go:130] >     },
	I0725 18:20:49.482179   42258 command_runner.go:130] >     {
	I0725 18:20:49.482185   42258 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0725 18:20:49.482191   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.482197   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0725 18:20:49.482201   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482205   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.482221   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0725 18:20:49.482231   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0725 18:20:49.482236   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482241   42258 command_runner.go:130] >       "size": "112198984",
	I0725 18:20:49.482246   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.482250   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.482256   42258 command_runner.go:130] >       },
	I0725 18:20:49.482260   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482264   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482270   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.482274   42258 command_runner.go:130] >     },
	I0725 18:20:49.482277   42258 command_runner.go:130] >     {
	I0725 18:20:49.482283   42258 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0725 18:20:49.482290   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.482294   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0725 18:20:49.482299   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482303   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.482312   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0725 18:20:49.482319   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0725 18:20:49.482324   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482328   42258 command_runner.go:130] >       "size": "85953945",
	I0725 18:20:49.482332   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.482336   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482340   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482344   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.482347   42258 command_runner.go:130] >     },
	I0725 18:20:49.482351   42258 command_runner.go:130] >     {
	I0725 18:20:49.482357   42258 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0725 18:20:49.482363   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.482369   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0725 18:20:49.482374   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482378   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.482385   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0725 18:20:49.482394   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0725 18:20:49.482398   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482402   42258 command_runner.go:130] >       "size": "63051080",
	I0725 18:20:49.482406   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.482410   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.482414   42258 command_runner.go:130] >       },
	I0725 18:20:49.482420   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482425   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482429   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.482432   42258 command_runner.go:130] >     },
	I0725 18:20:49.482439   42258 command_runner.go:130] >     {
	I0725 18:20:49.482449   42258 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0725 18:20:49.482459   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.482469   42258 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0725 18:20:49.482474   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482482   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.482491   42258 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0725 18:20:49.482504   42258 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0725 18:20:49.482512   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482523   42258 command_runner.go:130] >       "size": "750414",
	I0725 18:20:49.482531   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.482537   42258 command_runner.go:130] >         "value": "65535"
	I0725 18:20:49.482543   42258 command_runner.go:130] >       },
	I0725 18:20:49.482548   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482555   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482560   42258 command_runner.go:130] >       "pinned": true
	I0725 18:20:49.482569   42258 command_runner.go:130] >     }
	I0725 18:20:49.482576   42258 command_runner.go:130] >   ]
	I0725 18:20:49.482580   42258 command_runner.go:130] > }
	I0725 18:20:49.482857   42258 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:20:49.482875   42258 cache_images.go:84] Images are preloaded, skipping loading
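Annotation: the two "sudo crictl images --output json" runs above return the same inventory, which is why the runner concludes the preload tarball needs neither extraction nor loading: every image required for Kubernetes v1.30.3 on cri-o (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd 3.5.12-0, coredns v1.11.1, pause 3.9) plus the kindnetd and storage-provisioner images is already in the store. The same check can be repeated by hand against the node; the jq filter below is only an illustration (it assumes jq is installed on the host and is not part of the test flow), and the field names match the JSON shown above:

    out/minikube-linux-amd64 -p multinode-253131 ssh "sudo crictl images --output json" | jq -r '.images[].repoTags[]'
    docker.io/kindest/kindnetd:v20240715-585640e9
    ...
    registry.k8s.io/kube-apiserver:v1.30.3
    registry.k8s.io/pause:3.9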
	I0725 18:20:49.482884   42258 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.30.3 crio true true} ...
	I0725 18:20:49.482981   42258 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-253131 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-253131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
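Annotation: the fragment above is the kubelet systemd drop-in plus cluster config that kubeadm.go renders for this node. Wants=crio.service ties kubelet startup to the cri-o service; the first, empty ExecStart= clears any ExecStart inherited from the packaged unit (standard systemd behaviour when overriding a command from a drop-in); the second ExecStart= launches the pinned /var/lib/minikube/binaries/v1.30.3/kubelet with the bootstrap kubeconfig, --node-ip=192.168.39.54 and --hostname-override=multinode-253131. To confirm what systemd actually merged on the guest, the unit and its drop-ins can be printed (a sketch; the drop-in path itself is managed by minikube):

    out/minikube-linux-amd64 -p multinode-253131 ssh "sudo systemctl cat kubelet"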
	I0725 18:20:49.483042   42258 ssh_runner.go:195] Run: crio config
	I0725 18:20:49.526034   42258 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0725 18:20:49.526071   42258 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0725 18:20:49.526084   42258 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0725 18:20:49.526089   42258 command_runner.go:130] > #
	I0725 18:20:49.526100   42258 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0725 18:20:49.526107   42258 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0725 18:20:49.526113   42258 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0725 18:20:49.526120   42258 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0725 18:20:49.526129   42258 command_runner.go:130] > # reload'.
	I0725 18:20:49.526135   42258 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0725 18:20:49.526141   42258 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0725 18:20:49.526147   42258 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0725 18:20:49.526155   42258 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0725 18:20:49.526160   42258 command_runner.go:130] > [crio]
	I0725 18:20:49.526170   42258 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0725 18:20:49.526181   42258 command_runner.go:130] > # containers images, in this directory.
	I0725 18:20:49.526188   42258 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0725 18:20:49.526204   42258 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0725 18:20:49.526213   42258 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0725 18:20:49.526223   42258 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0725 18:20:49.526233   42258 command_runner.go:130] > # imagestore = ""
	I0725 18:20:49.526241   42258 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0725 18:20:49.526252   42258 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0725 18:20:49.526260   42258 command_runner.go:130] > storage_driver = "overlay"
	I0725 18:20:49.526272   42258 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0725 18:20:49.526281   42258 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0725 18:20:49.526290   42258 command_runner.go:130] > storage_option = [
	I0725 18:20:49.526298   42258 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0725 18:20:49.526306   42258 command_runner.go:130] > ]
	I0725 18:20:49.526316   42258 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0725 18:20:49.526325   42258 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0725 18:20:49.526335   42258 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0725 18:20:49.526347   42258 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0725 18:20:49.526359   42258 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0725 18:20:49.526366   42258 command_runner.go:130] > # always happen on a node reboot
	I0725 18:20:49.526371   42258 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0725 18:20:49.526380   42258 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0725 18:20:49.526388   42258 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0725 18:20:49.526393   42258 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0725 18:20:49.526399   42258 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0725 18:20:49.526406   42258 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0725 18:20:49.526422   42258 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0725 18:20:49.526430   42258 command_runner.go:130] > # internal_wipe = true
	I0725 18:20:49.526443   42258 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0725 18:20:49.526453   42258 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0725 18:20:49.526460   42258 command_runner.go:130] > # internal_repair = false
	I0725 18:20:49.526472   42258 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0725 18:20:49.526484   42258 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0725 18:20:49.526495   42258 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0725 18:20:49.526506   42258 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0725 18:20:49.526515   42258 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0725 18:20:49.526530   42258 command_runner.go:130] > [crio.api]
	I0725 18:20:49.526538   42258 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0725 18:20:49.526548   42258 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0725 18:20:49.526556   42258 command_runner.go:130] > # IP address on which the stream server will listen.
	I0725 18:20:49.526566   42258 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0725 18:20:49.526577   42258 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0725 18:20:49.526588   42258 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0725 18:20:49.526595   42258 command_runner.go:130] > # stream_port = "0"
	I0725 18:20:49.526605   42258 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0725 18:20:49.526614   42258 command_runner.go:130] > # stream_enable_tls = false
	I0725 18:20:49.526624   42258 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0725 18:20:49.526633   42258 command_runner.go:130] > # stream_idle_timeout = ""
	I0725 18:20:49.526642   42258 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0725 18:20:49.526654   42258 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0725 18:20:49.526663   42258 command_runner.go:130] > # minutes.
	I0725 18:20:49.526673   42258 command_runner.go:130] > # stream_tls_cert = ""
	I0725 18:20:49.526686   42258 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0725 18:20:49.526699   42258 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0725 18:20:49.526709   42258 command_runner.go:130] > # stream_tls_key = ""
	I0725 18:20:49.526719   42258 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0725 18:20:49.526731   42258 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0725 18:20:49.526750   42258 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0725 18:20:49.526759   42258 command_runner.go:130] > # stream_tls_ca = ""
	I0725 18:20:49.526770   42258 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0725 18:20:49.526779   42258 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0725 18:20:49.526791   42258 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0725 18:20:49.526801   42258 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
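Annotation: in this crio config dump, lines beginning with '#' are the documented defaults, while the uncommented keys are the values actually in effect on the guest: root/runroot, storage_driver = "overlay" with its overlay.mountopt storage option, version_file_persist, and the two 16 MiB gRPC message-size limits just above, with cgroup_manager = "cgroupfs", pids_limit = 1024, drop_infra_ctr = false and pinns_path following further down. A quick way to see only the active settings is to strip comment and blank lines (illustrative only, run from the host):

    out/minikube-linux-amd64 -p multinode-253131 ssh "sudo crio config" | grep -vE '^[[:space:]]*(#|$)'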
	I0725 18:20:49.526814   42258 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0725 18:20:49.526826   42258 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0725 18:20:49.526832   42258 command_runner.go:130] > [crio.runtime]
	I0725 18:20:49.526840   42258 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0725 18:20:49.526852   42258 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0725 18:20:49.526861   42258 command_runner.go:130] > # "nofile=1024:2048"
	I0725 18:20:49.526870   42258 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0725 18:20:49.526880   42258 command_runner.go:130] > # default_ulimits = [
	I0725 18:20:49.526885   42258 command_runner.go:130] > # ]
	I0725 18:20:49.526897   42258 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0725 18:20:49.526906   42258 command_runner.go:130] > # no_pivot = false
	I0725 18:20:49.526915   42258 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0725 18:20:49.526928   42258 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0725 18:20:49.526939   42258 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0725 18:20:49.526951   42258 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0725 18:20:49.526961   42258 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0725 18:20:49.526975   42258 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0725 18:20:49.526985   42258 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0725 18:20:49.526993   42258 command_runner.go:130] > # Cgroup setting for conmon
	I0725 18:20:49.527006   42258 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0725 18:20:49.527012   42258 command_runner.go:130] > conmon_cgroup = "pod"
	I0725 18:20:49.527023   42258 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0725 18:20:49.527034   42258 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0725 18:20:49.527050   42258 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0725 18:20:49.527058   42258 command_runner.go:130] > conmon_env = [
	I0725 18:20:49.527068   42258 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0725 18:20:49.527076   42258 command_runner.go:130] > ]
	I0725 18:20:49.527087   42258 command_runner.go:130] > # Additional environment variables to set for all the
	I0725 18:20:49.527099   42258 command_runner.go:130] > # containers. These are overridden if set in the
	I0725 18:20:49.527108   42258 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0725 18:20:49.527117   42258 command_runner.go:130] > # default_env = [
	I0725 18:20:49.527121   42258 command_runner.go:130] > # ]
	I0725 18:20:49.527126   42258 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0725 18:20:49.527136   42258 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0725 18:20:49.527142   42258 command_runner.go:130] > # selinux = false
	I0725 18:20:49.527152   42258 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0725 18:20:49.527164   42258 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0725 18:20:49.527176   42258 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0725 18:20:49.527186   42258 command_runner.go:130] > # seccomp_profile = ""
	I0725 18:20:49.527195   42258 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0725 18:20:49.527206   42258 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0725 18:20:49.527217   42258 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0725 18:20:49.527226   42258 command_runner.go:130] > # which might increase security.
	I0725 18:20:49.527233   42258 command_runner.go:130] > # This option is currently deprecated,
	I0725 18:20:49.527245   42258 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0725 18:20:49.527256   42258 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0725 18:20:49.527270   42258 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0725 18:20:49.527283   42258 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0725 18:20:49.527293   42258 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0725 18:20:49.527305   42258 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0725 18:20:49.527313   42258 command_runner.go:130] > # This option supports live configuration reload.
	I0725 18:20:49.527323   42258 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0725 18:20:49.527333   42258 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0725 18:20:49.527343   42258 command_runner.go:130] > # the cgroup blockio controller.
	I0725 18:20:49.527350   42258 command_runner.go:130] > # blockio_config_file = ""
	I0725 18:20:49.527363   42258 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0725 18:20:49.527373   42258 command_runner.go:130] > # blockio parameters.
	I0725 18:20:49.527380   42258 command_runner.go:130] > # blockio_reload = false
	I0725 18:20:49.527392   42258 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0725 18:20:49.527402   42258 command_runner.go:130] > # irqbalance daemon.
	I0725 18:20:49.527413   42258 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0725 18:20:49.527425   42258 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0725 18:20:49.527439   42258 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0725 18:20:49.527454   42258 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0725 18:20:49.527467   42258 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0725 18:20:49.527478   42258 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0725 18:20:49.527489   42258 command_runner.go:130] > # This option supports live configuration reload.
	I0725 18:20:49.527497   42258 command_runner.go:130] > # rdt_config_file = ""
	I0725 18:20:49.527506   42258 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0725 18:20:49.527516   42258 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0725 18:20:49.527580   42258 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0725 18:20:49.527596   42258 command_runner.go:130] > # separate_pull_cgroup = ""
	I0725 18:20:49.527605   42258 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0725 18:20:49.527615   42258 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0725 18:20:49.527624   42258 command_runner.go:130] > # will be added.
	I0725 18:20:49.527632   42258 command_runner.go:130] > # default_capabilities = [
	I0725 18:20:49.527640   42258 command_runner.go:130] > # 	"CHOWN",
	I0725 18:20:49.527646   42258 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0725 18:20:49.527655   42258 command_runner.go:130] > # 	"FSETID",
	I0725 18:20:49.527661   42258 command_runner.go:130] > # 	"FOWNER",
	I0725 18:20:49.527670   42258 command_runner.go:130] > # 	"SETGID",
	I0725 18:20:49.527676   42258 command_runner.go:130] > # 	"SETUID",
	I0725 18:20:49.527685   42258 command_runner.go:130] > # 	"SETPCAP",
	I0725 18:20:49.527691   42258 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0725 18:20:49.527700   42258 command_runner.go:130] > # 	"KILL",
	I0725 18:20:49.527705   42258 command_runner.go:130] > # ]
	I0725 18:20:49.527720   42258 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0725 18:20:49.527733   42258 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0725 18:20:49.527743   42258 command_runner.go:130] > # add_inheritable_capabilities = false
	I0725 18:20:49.527756   42258 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0725 18:20:49.527766   42258 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0725 18:20:49.527775   42258 command_runner.go:130] > default_sysctls = [
	I0725 18:20:49.527783   42258 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0725 18:20:49.527791   42258 command_runner.go:130] > ]
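Annotation: the single default sysctl above, net.ipv4.ip_unprivileged_port_start=0, lets containers on this cluster bind ports below 1024 without the NET_BIND_SERVICE capability. Since the sysctl is applied per network namespace, it can be checked from inside any running pod; the pod name below is hypothetical, and the command should print 0 when the default has been applied:

    kubectl --context multinode-253131 exec <busybox-pod> -- cat /proc/sys/net/ipv4/ip_unprivileged_port_start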
	I0725 18:20:49.527798   42258 command_runner.go:130] > # List of devices on the host that a
	I0725 18:20:49.527811   42258 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0725 18:20:49.527817   42258 command_runner.go:130] > # allowed_devices = [
	I0725 18:20:49.527826   42258 command_runner.go:130] > # 	"/dev/fuse",
	I0725 18:20:49.527831   42258 command_runner.go:130] > # ]
	I0725 18:20:49.527984   42258 command_runner.go:130] > # List of additional devices. specified as
	I0725 18:20:49.528008   42258 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0725 18:20:49.528023   42258 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0725 18:20:49.528036   42258 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0725 18:20:49.528047   42258 command_runner.go:130] > # additional_devices = [
	I0725 18:20:49.528056   42258 command_runner.go:130] > # ]
	I0725 18:20:49.528111   42258 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0725 18:20:49.528126   42258 command_runner.go:130] > # cdi_spec_dirs = [
	I0725 18:20:49.528130   42258 command_runner.go:130] > # 	"/etc/cdi",
	I0725 18:20:49.528145   42258 command_runner.go:130] > # 	"/var/run/cdi",
	I0725 18:20:49.528155   42258 command_runner.go:130] > # ]
	I0725 18:20:49.528167   42258 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0725 18:20:49.528182   42258 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0725 18:20:49.528191   42258 command_runner.go:130] > # Defaults to false.
	I0725 18:20:49.528199   42258 command_runner.go:130] > # device_ownership_from_security_context = false
	I0725 18:20:49.528212   42258 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0725 18:20:49.528223   42258 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0725 18:20:49.528232   42258 command_runner.go:130] > # hooks_dir = [
	I0725 18:20:49.528241   42258 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0725 18:20:49.528249   42258 command_runner.go:130] > # ]
	I0725 18:20:49.528264   42258 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0725 18:20:49.528278   42258 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0725 18:20:49.528291   42258 command_runner.go:130] > # its default mounts from the following two files:
	I0725 18:20:49.528298   42258 command_runner.go:130] > #
	I0725 18:20:49.528308   42258 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0725 18:20:49.528334   42258 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0725 18:20:49.528347   42258 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0725 18:20:49.528352   42258 command_runner.go:130] > #
	I0725 18:20:49.528364   42258 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0725 18:20:49.528377   42258 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0725 18:20:49.528390   42258 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0725 18:20:49.528400   42258 command_runner.go:130] > #      only add mounts it finds in this file.
	I0725 18:20:49.528408   42258 command_runner.go:130] > #
	I0725 18:20:49.528415   42258 command_runner.go:130] > # default_mounts_file = ""
	I0725 18:20:49.528427   42258 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0725 18:20:49.528440   42258 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0725 18:20:49.528451   42258 command_runner.go:130] > pids_limit = 1024
	I0725 18:20:49.528463   42258 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0725 18:20:49.528476   42258 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0725 18:20:49.528485   42258 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0725 18:20:49.528499   42258 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0725 18:20:49.528507   42258 command_runner.go:130] > # log_size_max = -1
	I0725 18:20:49.528517   42258 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0725 18:20:49.528523   42258 command_runner.go:130] > # log_to_journald = false
	I0725 18:20:49.528537   42258 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0725 18:20:49.528547   42258 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0725 18:20:49.528556   42258 command_runner.go:130] > # Path to directory for container attach sockets.
	I0725 18:20:49.528570   42258 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0725 18:20:49.528581   42258 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0725 18:20:49.528591   42258 command_runner.go:130] > # bind_mount_prefix = ""
	I0725 18:20:49.528602   42258 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0725 18:20:49.528611   42258 command_runner.go:130] > # read_only = false
	I0725 18:20:49.528620   42258 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0725 18:20:49.528628   42258 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0725 18:20:49.528635   42258 command_runner.go:130] > # live configuration reload.
	I0725 18:20:49.528639   42258 command_runner.go:130] > # log_level = "info"
	I0725 18:20:49.528645   42258 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0725 18:20:49.528652   42258 command_runner.go:130] > # This option supports live configuration reload.
	I0725 18:20:49.528657   42258 command_runner.go:130] > # log_filter = ""
	I0725 18:20:49.528665   42258 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0725 18:20:49.528673   42258 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0725 18:20:49.528677   42258 command_runner.go:130] > # separated by comma.
	I0725 18:20:49.528685   42258 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0725 18:20:49.528691   42258 command_runner.go:130] > # uid_mappings = ""
	I0725 18:20:49.528697   42258 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0725 18:20:49.528705   42258 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0725 18:20:49.528709   42258 command_runner.go:130] > # separated by comma.
	I0725 18:20:49.528716   42258 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0725 18:20:49.528722   42258 command_runner.go:130] > # gid_mappings = ""
	I0725 18:20:49.528728   42258 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0725 18:20:49.528736   42258 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0725 18:20:49.528744   42258 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0725 18:20:49.528759   42258 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0725 18:20:49.528766   42258 command_runner.go:130] > # minimum_mappable_uid = -1
	I0725 18:20:49.528772   42258 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0725 18:20:49.528779   42258 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0725 18:20:49.528786   42258 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0725 18:20:49.528795   42258 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0725 18:20:49.528801   42258 command_runner.go:130] > # minimum_mappable_gid = -1
	I0725 18:20:49.528806   42258 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0725 18:20:49.528814   42258 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0725 18:20:49.528821   42258 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0725 18:20:49.528832   42258 command_runner.go:130] > # ctr_stop_timeout = 30
	I0725 18:20:49.528840   42258 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0725 18:20:49.528845   42258 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0725 18:20:49.528852   42258 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0725 18:20:49.528857   42258 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0725 18:20:49.528863   42258 command_runner.go:130] > drop_infra_ctr = false
	I0725 18:20:49.528869   42258 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0725 18:20:49.528876   42258 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0725 18:20:49.528885   42258 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0725 18:20:49.528891   42258 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0725 18:20:49.528898   42258 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0725 18:20:49.528905   42258 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0725 18:20:49.528913   42258 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0725 18:20:49.528920   42258 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0725 18:20:49.528924   42258 command_runner.go:130] > # shared_cpuset = ""
	I0725 18:20:49.528931   42258 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0725 18:20:49.528936   42258 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0725 18:20:49.528943   42258 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0725 18:20:49.528950   42258 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0725 18:20:49.528956   42258 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0725 18:20:49.528961   42258 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0725 18:20:49.528969   42258 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0725 18:20:49.528975   42258 command_runner.go:130] > # enable_criu_support = false
	I0725 18:20:49.528981   42258 command_runner.go:130] > # Enable/disable the generation of the container,
	I0725 18:20:49.528988   42258 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0725 18:20:49.528993   42258 command_runner.go:130] > # enable_pod_events = false
	I0725 18:20:49.528999   42258 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0725 18:20:49.529007   42258 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0725 18:20:49.529012   42258 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0725 18:20:49.529018   42258 command_runner.go:130] > # default_runtime = "runc"
	I0725 18:20:49.529023   42258 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0725 18:20:49.529032   42258 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0725 18:20:49.529042   42258 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0725 18:20:49.529052   42258 command_runner.go:130] > # creation as a file is not desired either.
	I0725 18:20:49.529062   42258 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0725 18:20:49.529069   42258 command_runner.go:130] > # the hostname is being managed dynamically.
	I0725 18:20:49.529073   42258 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0725 18:20:49.529079   42258 command_runner.go:130] > # ]
	I0725 18:20:49.529085   42258 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0725 18:20:49.529094   42258 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0725 18:20:49.529099   42258 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0725 18:20:49.529104   42258 command_runner.go:130] > # Each entry in the table should follow the format:
	I0725 18:20:49.529110   42258 command_runner.go:130] > #
	I0725 18:20:49.529114   42258 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0725 18:20:49.529119   42258 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0725 18:20:49.529140   42258 command_runner.go:130] > # runtime_type = "oci"
	I0725 18:20:49.529146   42258 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0725 18:20:49.529151   42258 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0725 18:20:49.529157   42258 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0725 18:20:49.529162   42258 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0725 18:20:49.529168   42258 command_runner.go:130] > # monitor_env = []
	I0725 18:20:49.529172   42258 command_runner.go:130] > # privileged_without_host_devices = false
	I0725 18:20:49.529178   42258 command_runner.go:130] > # allowed_annotations = []
	I0725 18:20:49.529183   42258 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0725 18:20:49.529189   42258 command_runner.go:130] > # Where:
	I0725 18:20:49.529194   42258 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0725 18:20:49.529202   42258 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0725 18:20:49.529208   42258 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0725 18:20:49.529216   42258 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0725 18:20:49.529221   42258 command_runner.go:130] > #   in $PATH.
	I0725 18:20:49.529227   42258 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0725 18:20:49.529234   42258 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0725 18:20:49.529240   42258 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0725 18:20:49.529246   42258 command_runner.go:130] > #   state.
	I0725 18:20:49.529251   42258 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0725 18:20:49.529277   42258 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0725 18:20:49.529285   42258 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0725 18:20:49.529290   42258 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0725 18:20:49.529298   42258 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0725 18:20:49.529305   42258 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0725 18:20:49.529311   42258 command_runner.go:130] > #   The currently recognized values are:
	I0725 18:20:49.529321   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0725 18:20:49.529330   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0725 18:20:49.529338   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0725 18:20:49.529346   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0725 18:20:49.529353   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0725 18:20:49.529362   42258 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0725 18:20:49.529370   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0725 18:20:49.529376   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0725 18:20:49.529384   42258 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0725 18:20:49.529392   42258 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0725 18:20:49.529399   42258 command_runner.go:130] > #   deprecated option "conmon".
	I0725 18:20:49.529405   42258 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0725 18:20:49.529412   42258 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0725 18:20:49.529419   42258 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0725 18:20:49.529425   42258 command_runner.go:130] > #   should be moved to the container's cgroup
	I0725 18:20:49.529432   42258 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0725 18:20:49.529438   42258 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0725 18:20:49.529444   42258 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0725 18:20:49.529451   42258 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0725 18:20:49.529454   42258 command_runner.go:130] > #
	I0725 18:20:49.529459   42258 command_runner.go:130] > # Using the seccomp notifier feature:
	I0725 18:20:49.529463   42258 command_runner.go:130] > #
	I0725 18:20:49.529469   42258 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0725 18:20:49.529487   42258 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0725 18:20:49.529493   42258 command_runner.go:130] > #
	I0725 18:20:49.529499   42258 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0725 18:20:49.529509   42258 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0725 18:20:49.529515   42258 command_runner.go:130] > #
	I0725 18:20:49.529521   42258 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0725 18:20:49.529525   42258 command_runner.go:130] > # feature.
	I0725 18:20:49.529528   42258 command_runner.go:130] > #
	I0725 18:20:49.529536   42258 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0725 18:20:49.529544   42258 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0725 18:20:49.529550   42258 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0725 18:20:49.529559   42258 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0725 18:20:49.529567   42258 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0725 18:20:49.529572   42258 command_runner.go:130] > #
	I0725 18:20:49.529578   42258 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0725 18:20:49.529586   42258 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0725 18:20:49.529591   42258 command_runner.go:130] > #
	I0725 18:20:49.529596   42258 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0725 18:20:49.529604   42258 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0725 18:20:49.529608   42258 command_runner.go:130] > #
	I0725 18:20:49.529614   42258 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0725 18:20:49.529622   42258 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0725 18:20:49.529627   42258 command_runner.go:130] > # limitation.
	I0725 18:20:49.529631   42258 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0725 18:20:49.529635   42258 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0725 18:20:49.529641   42258 command_runner.go:130] > runtime_type = "oci"
	I0725 18:20:49.529645   42258 command_runner.go:130] > runtime_root = "/run/runc"
	I0725 18:20:49.529650   42258 command_runner.go:130] > runtime_config_path = ""
	I0725 18:20:49.529655   42258 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0725 18:20:49.529661   42258 command_runner.go:130] > monitor_cgroup = "pod"
	I0725 18:20:49.529665   42258 command_runner.go:130] > monitor_exec_cgroup = ""
	I0725 18:20:49.529671   42258 command_runner.go:130] > monitor_env = [
	I0725 18:20:49.529676   42258 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0725 18:20:49.529682   42258 command_runner.go:130] > ]
	I0725 18:20:49.529687   42258 command_runner.go:130] > privileged_without_host_devices = false
	I0725 18:20:49.529695   42258 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0725 18:20:49.529702   42258 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0725 18:20:49.529707   42258 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0725 18:20:49.529716   42258 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0725 18:20:49.529724   42258 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0725 18:20:49.529732   42258 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0725 18:20:49.529743   42258 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0725 18:20:49.529752   42258 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0725 18:20:49.529759   42258 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0725 18:20:49.529768   42258 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0725 18:20:49.529773   42258 command_runner.go:130] > # Example:
	I0725 18:20:49.529777   42258 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0725 18:20:49.529781   42258 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0725 18:20:49.529786   42258 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0725 18:20:49.529790   42258 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0725 18:20:49.529794   42258 command_runner.go:130] > # cpuset = 0
	I0725 18:20:49.529797   42258 command_runner.go:130] > # cpushares = "0-1"
	I0725 18:20:49.529800   42258 command_runner.go:130] > # Where:
	I0725 18:20:49.529804   42258 command_runner.go:130] > # The workload name is workload-type.
	I0725 18:20:49.529810   42258 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0725 18:20:49.529815   42258 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0725 18:20:49.529820   42258 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0725 18:20:49.529826   42258 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0725 18:20:49.529831   42258 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0725 18:20:49.529836   42258 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0725 18:20:49.529842   42258 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0725 18:20:49.529845   42258 command_runner.go:130] > # Default value is set to true
	I0725 18:20:49.529849   42258 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0725 18:20:49.529854   42258 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0725 18:20:49.529859   42258 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0725 18:20:49.529863   42258 command_runner.go:130] > # Default value is set to 'false'
	I0725 18:20:49.529866   42258 command_runner.go:130] > # disable_hostport_mapping = false
	I0725 18:20:49.529872   42258 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0725 18:20:49.529874   42258 command_runner.go:130] > #
	I0725 18:20:49.529879   42258 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0725 18:20:49.529888   42258 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0725 18:20:49.529893   42258 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0725 18:20:49.529898   42258 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0725 18:20:49.529903   42258 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0725 18:20:49.529907   42258 command_runner.go:130] > [crio.image]
	I0725 18:20:49.529912   42258 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0725 18:20:49.529917   42258 command_runner.go:130] > # default_transport = "docker://"
	I0725 18:20:49.529922   42258 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0725 18:20:49.529928   42258 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0725 18:20:49.529932   42258 command_runner.go:130] > # global_auth_file = ""
	I0725 18:20:49.529937   42258 command_runner.go:130] > # The image used to instantiate infra containers.
	I0725 18:20:49.529941   42258 command_runner.go:130] > # This option supports live configuration reload.
	I0725 18:20:49.529946   42258 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0725 18:20:49.529951   42258 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0725 18:20:49.529959   42258 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0725 18:20:49.529965   42258 command_runner.go:130] > # This option supports live configuration reload.
	I0725 18:20:49.529974   42258 command_runner.go:130] > # pause_image_auth_file = ""
	I0725 18:20:49.529981   42258 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0725 18:20:49.529987   42258 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0725 18:20:49.529993   42258 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0725 18:20:49.529999   42258 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0725 18:20:49.530005   42258 command_runner.go:130] > # pause_command = "/pause"
	I0725 18:20:49.530010   42258 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0725 18:20:49.530018   42258 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0725 18:20:49.530024   42258 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0725 18:20:49.530032   42258 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0725 18:20:49.530038   42258 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0725 18:20:49.530043   42258 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0725 18:20:49.530050   42258 command_runner.go:130] > # pinned_images = [
	I0725 18:20:49.530053   42258 command_runner.go:130] > # ]
	I0725 18:20:49.530058   42258 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0725 18:20:49.530065   42258 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0725 18:20:49.530071   42258 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0725 18:20:49.530078   42258 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0725 18:20:49.530083   42258 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0725 18:20:49.530089   42258 command_runner.go:130] > # signature_policy = ""
	I0725 18:20:49.530094   42258 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0725 18:20:49.530102   42258 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0725 18:20:49.530108   42258 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0725 18:20:49.530117   42258 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0725 18:20:49.530122   42258 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0725 18:20:49.530127   42258 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0725 18:20:49.530135   42258 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0725 18:20:49.530141   42258 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0725 18:20:49.530147   42258 command_runner.go:130] > # changing them here.
	I0725 18:20:49.530159   42258 command_runner.go:130] > # insecure_registries = [
	I0725 18:20:49.530166   42258 command_runner.go:130] > # ]
	I0725 18:20:49.530173   42258 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0725 18:20:49.530181   42258 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0725 18:20:49.530187   42258 command_runner.go:130] > # image_volumes = "mkdir"
	I0725 18:20:49.530195   42258 command_runner.go:130] > # Temporary directory to use for storing big files
	I0725 18:20:49.530204   42258 command_runner.go:130] > # big_files_temporary_dir = ""
	I0725 18:20:49.530212   42258 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0725 18:20:49.530221   42258 command_runner.go:130] > # CNI plugins.
	I0725 18:20:49.530225   42258 command_runner.go:130] > [crio.network]
	I0725 18:20:49.530230   42258 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0725 18:20:49.530236   42258 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0725 18:20:49.530240   42258 command_runner.go:130] > # cni_default_network = ""
	I0725 18:20:49.530246   42258 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0725 18:20:49.530253   42258 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0725 18:20:49.530262   42258 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0725 18:20:49.530268   42258 command_runner.go:130] > # plugin_dirs = [
	I0725 18:20:49.530273   42258 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0725 18:20:49.530276   42258 command_runner.go:130] > # ]
	I0725 18:20:49.530281   42258 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0725 18:20:49.530285   42258 command_runner.go:130] > [crio.metrics]
	I0725 18:20:49.530290   42258 command_runner.go:130] > # Globally enable or disable metrics support.
	I0725 18:20:49.530296   42258 command_runner.go:130] > enable_metrics = true
	I0725 18:20:49.530303   42258 command_runner.go:130] > # Specify enabled metrics collectors.
	I0725 18:20:49.530313   42258 command_runner.go:130] > # Per default all metrics are enabled.
	I0725 18:20:49.530323   42258 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0725 18:20:49.530334   42258 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0725 18:20:49.530344   42258 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0725 18:20:49.530351   42258 command_runner.go:130] > # metrics_collectors = [
	I0725 18:20:49.530359   42258 command_runner.go:130] > # 	"operations",
	I0725 18:20:49.530367   42258 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0725 18:20:49.530377   42258 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0725 18:20:49.530387   42258 command_runner.go:130] > # 	"operations_errors",
	I0725 18:20:49.530397   42258 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0725 18:20:49.530403   42258 command_runner.go:130] > # 	"image_pulls_by_name",
	I0725 18:20:49.530407   42258 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0725 18:20:49.530413   42258 command_runner.go:130] > # 	"image_pulls_failures",
	I0725 18:20:49.530417   42258 command_runner.go:130] > # 	"image_pulls_successes",
	I0725 18:20:49.530423   42258 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0725 18:20:49.530427   42258 command_runner.go:130] > # 	"image_layer_reuse",
	I0725 18:20:49.530434   42258 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0725 18:20:49.530438   42258 command_runner.go:130] > # 	"containers_oom_total",
	I0725 18:20:49.530444   42258 command_runner.go:130] > # 	"containers_oom",
	I0725 18:20:49.530448   42258 command_runner.go:130] > # 	"processes_defunct",
	I0725 18:20:49.530454   42258 command_runner.go:130] > # 	"operations_total",
	I0725 18:20:49.530458   42258 command_runner.go:130] > # 	"operations_latency_seconds",
	I0725 18:20:49.530465   42258 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0725 18:20:49.530469   42258 command_runner.go:130] > # 	"operations_errors_total",
	I0725 18:20:49.530475   42258 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0725 18:20:49.530480   42258 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0725 18:20:49.530486   42258 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0725 18:20:49.530491   42258 command_runner.go:130] > # 	"image_pulls_success_total",
	I0725 18:20:49.530498   42258 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0725 18:20:49.530502   42258 command_runner.go:130] > # 	"containers_oom_count_total",
	I0725 18:20:49.530510   42258 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0725 18:20:49.530514   42258 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0725 18:20:49.530518   42258 command_runner.go:130] > # ]
	I0725 18:20:49.530525   42258 command_runner.go:130] > # The port on which the metrics server will listen.
	I0725 18:20:49.530529   42258 command_runner.go:130] > # metrics_port = 9090
	I0725 18:20:49.530536   42258 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0725 18:20:49.530540   42258 command_runner.go:130] > # metrics_socket = ""
	I0725 18:20:49.530546   42258 command_runner.go:130] > # The certificate for the secure metrics server.
	I0725 18:20:49.530554   42258 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0725 18:20:49.530562   42258 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0725 18:20:49.530568   42258 command_runner.go:130] > # certificate on any modification event.
	I0725 18:20:49.530572   42258 command_runner.go:130] > # metrics_cert = ""
	I0725 18:20:49.530578   42258 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0725 18:20:49.530583   42258 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0725 18:20:49.530589   42258 command_runner.go:130] > # metrics_key = ""
	I0725 18:20:49.530595   42258 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0725 18:20:49.530601   42258 command_runner.go:130] > [crio.tracing]
	I0725 18:20:49.530605   42258 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0725 18:20:49.530609   42258 command_runner.go:130] > # enable_tracing = false
	I0725 18:20:49.530615   42258 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0725 18:20:49.530622   42258 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0725 18:20:49.530628   42258 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0725 18:20:49.530635   42258 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0725 18:20:49.530639   42258 command_runner.go:130] > # CRI-O NRI configuration.
	I0725 18:20:49.530643   42258 command_runner.go:130] > [crio.nri]
	I0725 18:20:49.530647   42258 command_runner.go:130] > # Globally enable or disable NRI.
	I0725 18:20:49.530653   42258 command_runner.go:130] > # enable_nri = false
	I0725 18:20:49.530657   42258 command_runner.go:130] > # NRI socket to listen on.
	I0725 18:20:49.530663   42258 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0725 18:20:49.530668   42258 command_runner.go:130] > # NRI plugin directory to use.
	I0725 18:20:49.530674   42258 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0725 18:20:49.530679   42258 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0725 18:20:49.530686   42258 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0725 18:20:49.530691   42258 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0725 18:20:49.530696   42258 command_runner.go:130] > # nri_disable_connections = false
	I0725 18:20:49.530701   42258 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0725 18:20:49.530707   42258 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0725 18:20:49.530712   42258 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0725 18:20:49.530718   42258 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0725 18:20:49.530724   42258 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0725 18:20:49.530730   42258 command_runner.go:130] > [crio.stats]
	I0725 18:20:49.530735   42258 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0725 18:20:49.530742   42258 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0725 18:20:49.530746   42258 command_runner.go:130] > # stats_collection_period = 0
	I0725 18:20:49.530770   42258 command_runner.go:130] ! time="2024-07-25 18:20:49.491406837Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0725 18:20:49.530787   42258 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
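	[editor's note] The configuration comments echoed above describe the [crio.runtime.runtimes] table, the allowed_annotations list, and the seccomp notifier. As a rough sketch of how an additional handler could be declared (not taken from this run; the handler name "crun", its paths, and the drop-in file name are assumptions for illustration), CRI-O also reads drop-in files from /etc/crio/crio.conf.d/:

# Hypothetical drop-in; handler name and binary path are illustrative only.
sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
monitor_path = "/usr/libexec/crio/conmon"
allowed_annotations = [
    "io.kubernetes.cri-o.seccompNotifierAction",
]
EOF
sudo systemctl restart crio   # restart so the new runtimes entry is picked up

	A pod would then select this handler through a RuntimeClass whose handler field is "crun" and opt into the notifier with the io.kubernetes.cri-o.seccompNotifierAction annotation described above.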
	I0725 18:20:49.530889   42258 cni.go:84] Creating CNI manager for ""
	I0725 18:20:49.530899   42258 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0725 18:20:49.530907   42258 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:20:49.530925   42258 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-253131 NodeName:multinode-253131 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:20:49.531048   42258 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-253131"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
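	[editor's note] The kubeadm configuration rendered above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (2157 bytes). A quick manual sanity check of what landed on the node, as a sketch reusing the same profile and ssh wrapper that appear elsewhere in this report (the diff target only exists if an older kubeadm.yaml was already deployed there):

# Show the freshly rendered config on the node and compare it to any existing one.
out/minikube-linux-amd64 -p multinode-253131 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
out/minikube-linux-amd64 -p multinode-253131 ssh \
  "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new" || true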
	
	I0725 18:20:49.531109   42258 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:20:49.540359   42258 command_runner.go:130] > kubeadm
	I0725 18:20:49.540380   42258 command_runner.go:130] > kubectl
	I0725 18:20:49.540384   42258 command_runner.go:130] > kubelet
	I0725 18:20:49.540404   42258 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:20:49.540456   42258 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:20:49.549110   42258 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0725 18:20:49.564714   42258 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:20:49.580521   42258 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0725 18:20:49.595696   42258 ssh_runner.go:195] Run: grep 192.168.39.54	control-plane.minikube.internal$ /etc/hosts
	I0725 18:20:49.599030   42258 command_runner.go:130] > 192.168.39.54	control-plane.minikube.internal
	I0725 18:20:49.599192   42258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:20:49.733120   42258 ssh_runner.go:195] Run: sudo systemctl start kubelet
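	[editor's note] After the daemon-reload and start above, a quick way to confirm the kubelet actually came up on the node (a sketch using the same profile name) is:

# Check the unit state and tail recent kubelet logs on the node.
out/minikube-linux-amd64 -p multinode-253131 ssh "sudo systemctl is-active kubelet"
out/minikube-linux-amd64 -p multinode-253131 ssh "sudo journalctl -u kubelet --no-pager -n 20"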
	I0725 18:20:49.747279   42258 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131 for IP: 192.168.39.54
	I0725 18:20:49.747305   42258 certs.go:194] generating shared ca certs ...
	I0725 18:20:49.747325   42258 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:20:49.747512   42258 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:20:49.747567   42258 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:20:49.747579   42258 certs.go:256] generating profile certs ...
	I0725 18:20:49.747672   42258 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/client.key
	I0725 18:20:49.747751   42258 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/apiserver.key.64a64755
	I0725 18:20:49.747797   42258 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/proxy-client.key
	I0725 18:20:49.747808   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 18:20:49.747820   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 18:20:49.747832   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 18:20:49.747845   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 18:20:49.747858   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 18:20:49.747871   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 18:20:49.747884   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 18:20:49.747896   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 18:20:49.747942   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:20:49.747970   42258 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:20:49.747976   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:20:49.747996   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:20:49.748013   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:20:49.748032   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:20:49.748068   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:20:49.748101   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:20:49.748119   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 18:20:49.748136   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 18:20:49.748710   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:20:49.771868   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:20:49.793159   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:20:49.815885   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:20:49.838054   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0725 18:20:49.860795   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:20:49.883190   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:20:49.904730   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:20:49.926794   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:20:49.948848   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:20:49.971805   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:20:49.993511   42258 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:20:50.008446   42258 ssh_runner.go:195] Run: openssl version
	I0725 18:20:50.013733   42258 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0725 18:20:50.013809   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:20:50.024202   42258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:20:50.028186   42258 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:20:50.028284   42258 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:20:50.028348   42258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:20:50.033433   42258 command_runner.go:130] > 3ec20f2e
	I0725 18:20:50.033643   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:20:50.042362   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:20:50.052085   42258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:20:50.055988   42258 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:20:50.056013   42258 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:20:50.056041   42258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:20:50.061249   42258 command_runner.go:130] > b5213941
	I0725 18:20:50.061299   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:20:50.069947   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:20:50.083490   42258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:20:50.102729   42258 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:20:50.102762   42258 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:20:50.102810   42258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:20:50.115563   42258 command_runner.go:130] > 51391683
	I0725 18:20:50.115630   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
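	[editor's note] The openssl/ln pairs above follow the c_rehash convention: tools that use -CApath locate a CA under /etc/ssl/certs by its subject-name hash plus a .0 suffix. A minimal sketch of the same steps by hand on the node, using the minikubeCA file from this log:

# Compute the subject hash, create the symlink OpenSSL expects, then verify the CA resolves.
h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem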
	I0725 18:20:50.163470   42258 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:20:50.179418   42258 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:20:50.179443   42258 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0725 18:20:50.179452   42258 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0725 18:20:50.179458   42258 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0725 18:20:50.179465   42258 command_runner.go:130] > Access: 2024-07-25 18:14:03.452350324 +0000
	I0725 18:20:50.179470   42258 command_runner.go:130] > Modify: 2024-07-25 18:14:03.452350324 +0000
	I0725 18:20:50.179475   42258 command_runner.go:130] > Change: 2024-07-25 18:14:03.452350324 +0000
	I0725 18:20:50.179479   42258 command_runner.go:130] >  Birth: 2024-07-25 18:14:03.452350324 +0000
	I0725 18:20:50.179570   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:20:50.186673   42258 command_runner.go:130] > Certificate will not expire
	I0725 18:20:50.186740   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:20:50.193572   42258 command_runner.go:130] > Certificate will not expire
	I0725 18:20:50.193709   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:20:50.200948   42258 command_runner.go:130] > Certificate will not expire
	I0725 18:20:50.201200   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:20:50.207041   42258 command_runner.go:130] > Certificate will not expire
	I0725 18:20:50.208275   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:20:50.217642   42258 command_runner.go:130] > Certificate will not expire
	I0725 18:20:50.217724   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 18:20:50.227637   42258 command_runner.go:130] > Certificate will not expire
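	[editor's note] Each "-checkend 86400" call above asks whether the certificate will expire within the next 86400 seconds (24 hours); openssl exits non-zero if it will, which is what the log's expiry checks key off. The same check can be repeated by hand on the node for the certs listed here, e.g.:

# Exit status 0 per cert means it will not expire within the next day.
for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
    && echo "${c}: ok"
done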
	I0725 18:20:50.227703   42258 kubeadm.go:392] StartCluster: {Name:multinode-253131 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-253131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:20:50.227835   42258 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:20:50.227879   42258 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:20:50.272974   42258 command_runner.go:130] > 92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf
	I0725 18:20:50.273005   42258 command_runner.go:130] > 74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314
	I0725 18:20:50.273014   42258 command_runner.go:130] > fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e
	I0725 18:20:50.273026   42258 command_runner.go:130] > 393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3
	I0725 18:20:50.273035   42258 command_runner.go:130] > 28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19
	I0725 18:20:50.273045   42258 command_runner.go:130] > 2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879
	I0725 18:20:50.273055   42258 command_runner.go:130] > 79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7
	I0725 18:20:50.273066   42258 command_runner.go:130] > a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601
	I0725 18:20:50.278353   42258 cri.go:89] found id: "92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf"
	I0725 18:20:50.278386   42258 cri.go:89] found id: "74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314"
	I0725 18:20:50.278392   42258 cri.go:89] found id: "fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e"
	I0725 18:20:50.278396   42258 cri.go:89] found id: "393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3"
	I0725 18:20:50.278400   42258 cri.go:89] found id: "28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19"
	I0725 18:20:50.278405   42258 cri.go:89] found id: "2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879"
	I0725 18:20:50.278409   42258 cri.go:89] found id: "79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7"
	I0725 18:20:50.278413   42258 cri.go:89] found id: "a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601"
	I0725 18:20:50.278416   42258 cri.go:89] found id: ""
	I0725 18:20:50.278467   42258 ssh_runner.go:195] Run: sudo runc list -f json
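	[editor's note] The eight IDs above come from the crictl call at the start of this block, which filters on the io.kubernetes.pod.namespace=kube-system label and prints bare IDs because of --quiet. Dropping --quiet gives a readable table with container names and states, e.g. (same profile and ssh wrapper as elsewhere in this report):

# Same label filter as in the log, but with human-readable columns.
out/minikube-linux-amd64 -p multinode-253131 ssh \
  "sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system"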
	
	
	==> CRI-O <==
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.846376684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721931755846338686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b87e036-7c34-4932-80ae-19b1b343fe68 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.846817596Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2da8ace-8686-4725-af4a-78aa319b3ba5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.846922957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2da8ace-8686-4725-af4a-78aa319b3ba5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.847252242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:682a55d67f4d2e97c9b5bbd4fac64a786da9ac2f902fb8413be1698b92504637,PodSandboxId:319e7aaaf7f9b3af07d4bb4a6ddc0fef65ff12c8730b3e2d060eecb47ffdf496,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721931690477505404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:061828a7da84f56a395fbd33965b2f380017500e948065f9b7ae570ad2cc6e59,PodSandboxId:0eec97385011b041c3294691946d1acf7ee38d54eb7d8ececaee5aa43f17508e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721931656744956479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175606982c3a8f8b49fbc0a9725c3d028ceb84a8b844fad4f595b912d38345bf,PodSandboxId:a8adf961fc6ba52a0c7416ee2384171e154f8807f4bb465fc103f7dbee23115a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721931656623773956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c30dbb647c586f95a4c05eee6aa5b984946e5a5ca47340f7e75e10d48037ba2,PodSandboxId:33dc8c79f7918cd4a402bae8b49e3b3eae54153ebbfab2a841f71d9671f87e5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721931656617465486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dceff06347a622de3709046ed5c6288223084c570922b66f3e21c4995037a0,PodSandboxId:bc1d6b028edd96c2bacfaa56712ff65a4b2f76659a8b461b6731c7aaca5d61f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721931656454667280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95f1dc59987d9bd7f9758959b9c1ccebfb2c65f2c3ab1076e434fec0844f2942,PodSandboxId:7e22c7dcc33411d89fc797dbd7b33e27214e73f5917e85764f41f275d05d7658,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721931652855385441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268ab6f9cddbcf651bef9126162a4114e8a3fc2ba7fb242dd8466a171e38a3a3,PodSandboxId:914bb26efea4c681acbcf9318f911fb1dd731330adda949b717769683593d449,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721931652827062892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f021a008e5e0fd1e671a5223b290e37dc6b833fcc88ecae62a13f7d7c52803,PodSandboxId:5caa427ed6f28db1d23631caaea08a84c458e65f778fc2399421cd0ababaee64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721931652825607639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b7d2bfc585bd62a1000712c61c55365116c5d2c7ff8e1119e92d8d00f03d90,PodSandboxId:6de88394a04d93e1585795b8db4095443d18071d4abcd9cad52d1eb641b421bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721931652775229033,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:167ef00cd9d3102c911c5423edd85799224cd96e8f3a170a7194908f5dec5045,PodSandboxId:62c1f7c61ac0bfe01bc9a49a746b9583afd30f7dd7079ae512f164022059732d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721931334165439984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf,PodSandboxId:460272c4df4757240f7b974f6c89ef2fb602cf733a8d946b855830ad8903d979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721931281543844474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314,PodSandboxId:f21693e764cdc377626a91c1f6ca8e8e11b0a96e489d5566d8c172dceee21e44,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721931281460573186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e,PodSandboxId:4229d7f04f2ccc1bc2e5c50e63f5084e9b04e41d277d0327f1133928ecefd661,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721931269955087773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3,PodSandboxId:edcbd05703aa08de6b4ff81b30dc40975d96556e08145968b95dbcc4b52f1d7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721931266197249766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879,PodSandboxId:4498f66aec6451e6b0dda70a070c539c83878c0f3998cb8ddad8622b66e7f015,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721931246945985722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278
,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19,PodSandboxId:3530f05aeb7ba304d086f38a388c2e203a1f6309e7df72be8b319d416eb247f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721931246984322341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7,PodSandboxId:45ca4744c187c19a1ba96aaa96aa1c8788f772915ff4c149208d6f2488151cfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721931246923473987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601,PodSandboxId:a235fb6d24116b5e920d25660b9bef08c1f249f01d962193d49cbf70128cb95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721931246884506018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2da8ace-8686-4725-af4a-78aa319b3ba5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.888080694Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afb4e478-2b45-447e-852e-d6a5795beda6 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.888163049Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afb4e478-2b45-447e-852e-d6a5795beda6 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.889212580Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35754dc6-fc32-4b6f-b81a-ebad598dc2b7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.889648420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721931755889626231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35754dc6-fc32-4b6f-b81a-ebad598dc2b7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.890188869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95fbdbf8-b8c5-4739-a706-29483de5b902 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.890250133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95fbdbf8-b8c5-4739-a706-29483de5b902 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.890744937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:682a55d67f4d2e97c9b5bbd4fac64a786da9ac2f902fb8413be1698b92504637,PodSandboxId:319e7aaaf7f9b3af07d4bb4a6ddc0fef65ff12c8730b3e2d060eecb47ffdf496,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721931690477505404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:061828a7da84f56a395fbd33965b2f380017500e948065f9b7ae570ad2cc6e59,PodSandboxId:0eec97385011b041c3294691946d1acf7ee38d54eb7d8ececaee5aa43f17508e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721931656744956479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175606982c3a8f8b49fbc0a9725c3d028ceb84a8b844fad4f595b912d38345bf,PodSandboxId:a8adf961fc6ba52a0c7416ee2384171e154f8807f4bb465fc103f7dbee23115a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721931656623773956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c30dbb647c586f95a4c05eee6aa5b984946e5a5ca47340f7e75e10d48037ba2,PodSandboxId:33dc8c79f7918cd4a402bae8b49e3b3eae54153ebbfab2a841f71d9671f87e5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721931656617465486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dceff06347a622de3709046ed5c6288223084c570922b66f3e21c4995037a0,PodSandboxId:bc1d6b028edd96c2bacfaa56712ff65a4b2f76659a8b461b6731c7aaca5d61f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721931656454667280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95f1dc59987d9bd7f9758959b9c1ccebfb2c65f2c3ab1076e434fec0844f2942,PodSandboxId:7e22c7dcc33411d89fc797dbd7b33e27214e73f5917e85764f41f275d05d7658,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721931652855385441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268ab6f9cddbcf651bef9126162a4114e8a3fc2ba7fb242dd8466a171e38a3a3,PodSandboxId:914bb26efea4c681acbcf9318f911fb1dd731330adda949b717769683593d449,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721931652827062892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f021a008e5e0fd1e671a5223b290e37dc6b833fcc88ecae62a13f7d7c52803,PodSandboxId:5caa427ed6f28db1d23631caaea08a84c458e65f778fc2399421cd0ababaee64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721931652825607639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b7d2bfc585bd62a1000712c61c55365116c5d2c7ff8e1119e92d8d00f03d90,PodSandboxId:6de88394a04d93e1585795b8db4095443d18071d4abcd9cad52d1eb641b421bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721931652775229033,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:167ef00cd9d3102c911c5423edd85799224cd96e8f3a170a7194908f5dec5045,PodSandboxId:62c1f7c61ac0bfe01bc9a49a746b9583afd30f7dd7079ae512f164022059732d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721931334165439984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf,PodSandboxId:460272c4df4757240f7b974f6c89ef2fb602cf733a8d946b855830ad8903d979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721931281543844474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314,PodSandboxId:f21693e764cdc377626a91c1f6ca8e8e11b0a96e489d5566d8c172dceee21e44,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721931281460573186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e,PodSandboxId:4229d7f04f2ccc1bc2e5c50e63f5084e9b04e41d277d0327f1133928ecefd661,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721931269955087773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3,PodSandboxId:edcbd05703aa08de6b4ff81b30dc40975d96556e08145968b95dbcc4b52f1d7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721931266197249766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879,PodSandboxId:4498f66aec6451e6b0dda70a070c539c83878c0f3998cb8ddad8622b66e7f015,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721931246945985722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278
,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19,PodSandboxId:3530f05aeb7ba304d086f38a388c2e203a1f6309e7df72be8b319d416eb247f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721931246984322341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7,PodSandboxId:45ca4744c187c19a1ba96aaa96aa1c8788f772915ff4c149208d6f2488151cfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721931246923473987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601,PodSandboxId:a235fb6d24116b5e920d25660b9bef08c1f249f01d962193d49cbf70128cb95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721931246884506018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95fbdbf8-b8c5-4739-a706-29483de5b902 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.927988736Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cca36a59-4e79-430d-a2e0-0eb2d5717444 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.928070910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cca36a59-4e79-430d-a2e0-0eb2d5717444 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.929422260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5dead22a-1d30-4b9a-85a2-bca1f651680e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.929836480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721931755929811254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5dead22a-1d30-4b9a-85a2-bca1f651680e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.930480489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa3822c8-849c-494c-a9e2-79cea9b2ca34 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.930553988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa3822c8-849c-494c-a9e2-79cea9b2ca34 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.930939844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:682a55d67f4d2e97c9b5bbd4fac64a786da9ac2f902fb8413be1698b92504637,PodSandboxId:319e7aaaf7f9b3af07d4bb4a6ddc0fef65ff12c8730b3e2d060eecb47ffdf496,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721931690477505404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:061828a7da84f56a395fbd33965b2f380017500e948065f9b7ae570ad2cc6e59,PodSandboxId:0eec97385011b041c3294691946d1acf7ee38d54eb7d8ececaee5aa43f17508e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721931656744956479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175606982c3a8f8b49fbc0a9725c3d028ceb84a8b844fad4f595b912d38345bf,PodSandboxId:a8adf961fc6ba52a0c7416ee2384171e154f8807f4bb465fc103f7dbee23115a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721931656623773956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c30dbb647c586f95a4c05eee6aa5b984946e5a5ca47340f7e75e10d48037ba2,PodSandboxId:33dc8c79f7918cd4a402bae8b49e3b3eae54153ebbfab2a841f71d9671f87e5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721931656617465486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dceff06347a622de3709046ed5c6288223084c570922b66f3e21c4995037a0,PodSandboxId:bc1d6b028edd96c2bacfaa56712ff65a4b2f76659a8b461b6731c7aaca5d61f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721931656454667280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95f1dc59987d9bd7f9758959b9c1ccebfb2c65f2c3ab1076e434fec0844f2942,PodSandboxId:7e22c7dcc33411d89fc797dbd7b33e27214e73f5917e85764f41f275d05d7658,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721931652855385441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268ab6f9cddbcf651bef9126162a4114e8a3fc2ba7fb242dd8466a171e38a3a3,PodSandboxId:914bb26efea4c681acbcf9318f911fb1dd731330adda949b717769683593d449,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721931652827062892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f021a008e5e0fd1e671a5223b290e37dc6b833fcc88ecae62a13f7d7c52803,PodSandboxId:5caa427ed6f28db1d23631caaea08a84c458e65f778fc2399421cd0ababaee64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721931652825607639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b7d2bfc585bd62a1000712c61c55365116c5d2c7ff8e1119e92d8d00f03d90,PodSandboxId:6de88394a04d93e1585795b8db4095443d18071d4abcd9cad52d1eb641b421bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721931652775229033,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:167ef00cd9d3102c911c5423edd85799224cd96e8f3a170a7194908f5dec5045,PodSandboxId:62c1f7c61ac0bfe01bc9a49a746b9583afd30f7dd7079ae512f164022059732d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721931334165439984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf,PodSandboxId:460272c4df4757240f7b974f6c89ef2fb602cf733a8d946b855830ad8903d979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721931281543844474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314,PodSandboxId:f21693e764cdc377626a91c1f6ca8e8e11b0a96e489d5566d8c172dceee21e44,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721931281460573186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e,PodSandboxId:4229d7f04f2ccc1bc2e5c50e63f5084e9b04e41d277d0327f1133928ecefd661,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721931269955087773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3,PodSandboxId:edcbd05703aa08de6b4ff81b30dc40975d96556e08145968b95dbcc4b52f1d7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721931266197249766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879,PodSandboxId:4498f66aec6451e6b0dda70a070c539c83878c0f3998cb8ddad8622b66e7f015,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721931246945985722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278
,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19,PodSandboxId:3530f05aeb7ba304d086f38a388c2e203a1f6309e7df72be8b319d416eb247f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721931246984322341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7,PodSandboxId:45ca4744c187c19a1ba96aaa96aa1c8788f772915ff4c149208d6f2488151cfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721931246923473987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601,PodSandboxId:a235fb6d24116b5e920d25660b9bef08c1f249f01d962193d49cbf70128cb95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721931246884506018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa3822c8-849c-494c-a9e2-79cea9b2ca34 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.970643568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afa3eeec-e1d9-48e8-a515-8f17bd155342 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.970727551Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afa3eeec-e1d9-48e8-a515-8f17bd155342 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.971652101Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c1ebfd5-2c35-4753-b2ab-8d89d717e3a0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.972199537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721931755972175379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c1ebfd5-2c35-4753-b2ab-8d89d717e3a0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.972880844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=baff5bfc-b505-4d12-b4b4-a287dec805dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.972984197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=baff5bfc-b505-4d12-b4b4-a287dec805dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:22:35 multinode-253131 crio[2878]: time="2024-07-25 18:22:35.973353813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:682a55d67f4d2e97c9b5bbd4fac64a786da9ac2f902fb8413be1698b92504637,PodSandboxId:319e7aaaf7f9b3af07d4bb4a6ddc0fef65ff12c8730b3e2d060eecb47ffdf496,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721931690477505404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:061828a7da84f56a395fbd33965b2f380017500e948065f9b7ae570ad2cc6e59,PodSandboxId:0eec97385011b041c3294691946d1acf7ee38d54eb7d8ececaee5aa43f17508e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721931656744956479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175606982c3a8f8b49fbc0a9725c3d028ceb84a8b844fad4f595b912d38345bf,PodSandboxId:a8adf961fc6ba52a0c7416ee2384171e154f8807f4bb465fc103f7dbee23115a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721931656623773956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c30dbb647c586f95a4c05eee6aa5b984946e5a5ca47340f7e75e10d48037ba2,PodSandboxId:33dc8c79f7918cd4a402bae8b49e3b3eae54153ebbfab2a841f71d9671f87e5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721931656617465486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dceff06347a622de3709046ed5c6288223084c570922b66f3e21c4995037a0,PodSandboxId:bc1d6b028edd96c2bacfaa56712ff65a4b2f76659a8b461b6731c7aaca5d61f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721931656454667280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95f1dc59987d9bd7f9758959b9c1ccebfb2c65f2c3ab1076e434fec0844f2942,PodSandboxId:7e22c7dcc33411d89fc797dbd7b33e27214e73f5917e85764f41f275d05d7658,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721931652855385441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268ab6f9cddbcf651bef9126162a4114e8a3fc2ba7fb242dd8466a171e38a3a3,PodSandboxId:914bb26efea4c681acbcf9318f911fb1dd731330adda949b717769683593d449,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721931652827062892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f021a008e5e0fd1e671a5223b290e37dc6b833fcc88ecae62a13f7d7c52803,PodSandboxId:5caa427ed6f28db1d23631caaea08a84c458e65f778fc2399421cd0ababaee64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721931652825607639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b7d2bfc585bd62a1000712c61c55365116c5d2c7ff8e1119e92d8d00f03d90,PodSandboxId:6de88394a04d93e1585795b8db4095443d18071d4abcd9cad52d1eb641b421bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721931652775229033,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:167ef00cd9d3102c911c5423edd85799224cd96e8f3a170a7194908f5dec5045,PodSandboxId:62c1f7c61ac0bfe01bc9a49a746b9583afd30f7dd7079ae512f164022059732d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721931334165439984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf,PodSandboxId:460272c4df4757240f7b974f6c89ef2fb602cf733a8d946b855830ad8903d979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721931281543844474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314,PodSandboxId:f21693e764cdc377626a91c1f6ca8e8e11b0a96e489d5566d8c172dceee21e44,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721931281460573186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e,PodSandboxId:4229d7f04f2ccc1bc2e5c50e63f5084e9b04e41d277d0327f1133928ecefd661,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721931269955087773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3,PodSandboxId:edcbd05703aa08de6b4ff81b30dc40975d96556e08145968b95dbcc4b52f1d7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721931266197249766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879,PodSandboxId:4498f66aec6451e6b0dda70a070c539c83878c0f3998cb8ddad8622b66e7f015,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721931246945985722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278
,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19,PodSandboxId:3530f05aeb7ba304d086f38a388c2e203a1f6309e7df72be8b319d416eb247f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721931246984322341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7,PodSandboxId:45ca4744c187c19a1ba96aaa96aa1c8788f772915ff4c149208d6f2488151cfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721931246923473987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601,PodSandboxId:a235fb6d24116b5e920d25660b9bef08c1f249f01d962193d49cbf70128cb95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721931246884506018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=baff5bfc-b505-4d12-b4b4-a287dec805dc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	682a55d67f4d2       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   319e7aaaf7f9b       busybox-fc5497c4f-gfbkg
	061828a7da84f       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   0eec97385011b       kindnet-hvwf2
	175606982c3a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   a8adf961fc6ba       storage-provisioner
	9c30dbb647c58       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   33dc8c79f7918       kube-proxy-zgrbq
	31dceff06347a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   bc1d6b028edd9       coredns-7db6d8ff4d-6lrr5
	95f1dc59987d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   7e22c7dcc3341       etcd-multinode-253131
	268ab6f9cddbc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   914bb26efea4c       kube-apiserver-multinode-253131
	33f021a008e5e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   5caa427ed6f28       kube-scheduler-multinode-253131
	43b7d2bfc585b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   6de88394a04d9       kube-controller-manager-multinode-253131
	167ef00cd9d31       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   62c1f7c61ac0b       busybox-fc5497c4f-gfbkg
	92575c8e1c68f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   460272c4df475       coredns-7db6d8ff4d-6lrr5
	74171fcbfdd95       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   f21693e764cdc       storage-provisioner
	fd663c148a619       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   4229d7f04f2cc       kindnet-hvwf2
	393e599ba9386       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   edcbd05703aa0       kube-proxy-zgrbq
	28021ff9ef2d5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   3530f05aeb7ba       kube-scheduler-multinode-253131
	2c878462f2ec4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   4498f66aec645       etcd-multinode-253131
	79df99dc269c4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   45ca4744c187c       kube-controller-manager-multinode-253131
	a7e2ce3e3194e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   a235fb6d24116       kube-apiserver-multinode-253131
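The container status table above is the node-local CRI view (truncated container IDs, one line per attempt). A minimal sketch to regenerate it for this cluster is shown below; it assumes the multinode-253131 profile is still running and that the minikube binary sits at the same path the report uses.

  # list every CRI-O container on the control-plane node, including exited first attempts
  out/minikube-linux-amd64 -p multinode-253131 ssh -- sudo crictl ps -a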
	
	
	==> coredns [31dceff06347a622de3709046ed5c6288223084c570922b66f3e21c4995037a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50178 - 41374 "HINFO IN 4171186552993392796.211708472645225929. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012901345s
	
	
	==> coredns [92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf] <==
	[INFO] 10.244.0.3:53535 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001924562s
	[INFO] 10.244.0.3:40356 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000041007s
	[INFO] 10.244.0.3:38393 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000026573s
	[INFO] 10.244.0.3:48020 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001291991s
	[INFO] 10.244.0.3:43625 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000672s
	[INFO] 10.244.0.3:34247 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028344s
	[INFO] 10.244.0.3:49942 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000026315s
	[INFO] 10.244.1.2:57962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010389s
	[INFO] 10.244.1.2:35238 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072557s
	[INFO] 10.244.1.2:35287 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006109s
	[INFO] 10.244.1.2:58908 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057223s
	[INFO] 10.244.0.3:54929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113877s
	[INFO] 10.244.0.3:53283 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055106s
	[INFO] 10.244.0.3:47891 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000036347s
	[INFO] 10.244.0.3:49543 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040581s
	[INFO] 10.244.1.2:35077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121149s
	[INFO] 10.244.1.2:53263 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000241449s
	[INFO] 10.244.1.2:41010 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147928s
	[INFO] 10.244.1.2:38095 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188619s
	[INFO] 10.244.0.3:40730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125004s
	[INFO] 10.244.0.3:60796 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121839s
	[INFO] 10.244.0.3:41629 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109043s
	[INFO] 10.244.0.3:52087 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088488s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
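The exited coredns container above stops at the SIGTERM it received during the node restart. To read the same streams through the API server instead of scraping node logs, something like the following should work, assuming the default kubectl context created for this profile and that the pod has not been rescheduled under a new name:

  # logs of the currently running coredns attempt
  kubectl --context multinode-253131 -n kube-system logs coredns-7db6d8ff4d-6lrr5
  # logs of the previous (exited) attempt of the same pod
  kubectl --context multinode-253131 -n kube-system logs coredns-7db6d8ff4d-6lrr5 --previous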
	
	
	==> describe nodes <==
	Name:               multinode-253131
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-253131
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=multinode-253131
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_14_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:14:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-253131
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:22:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 18:20:55 +0000   Thu, 25 Jul 2024 18:14:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 18:20:55 +0000   Thu, 25 Jul 2024 18:14:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 18:20:55 +0000   Thu, 25 Jul 2024 18:14:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 18:20:55 +0000   Thu, 25 Jul 2024 18:14:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    multinode-253131
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6d6d4c867ba4a5d817cd83a319b5b8c
	  System UUID:                d6d6d4c8-67ba-4a5d-817c-d83a319b5b8c
	  Boot ID:                    f0bb354f-9a8c-4409-83f9-236961443b72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gfbkg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 coredns-7db6d8ff4d-6lrr5                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m11s
	  kube-system                 etcd-multinode-253131                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m25s
	  kube-system                 kindnet-hvwf2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m11s
	  kube-system                 kube-apiserver-multinode-253131             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-controller-manager-multinode-253131    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-zgrbq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-scheduler-multinode-253131             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m9s                   kube-proxy       
	  Normal  Starting                 99s                    kube-proxy       
	  Normal  Starting                 8m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m30s (x8 over 8m30s)  kubelet          Node multinode-253131 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s (x8 over 8m30s)  kubelet          Node multinode-253131 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s (x7 over 8m30s)  kubelet          Node multinode-253131 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m25s                  kubelet          Node multinode-253131 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m25s                  kubelet          Node multinode-253131 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m25s                  kubelet          Node multinode-253131 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m25s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m12s                  node-controller  Node multinode-253131 event: Registered Node multinode-253131 in Controller
	  Normal  NodeReady                7m56s                  kubelet          Node multinode-253131 status is now: NodeReady
	  Normal  Starting                 104s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)    kubelet          Node multinode-253131 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)    kubelet          Node multinode-253131 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)    kubelet          Node multinode-253131 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           88s                    node-controller  Node multinode-253131 event: Registered Node multinode-253131 in Controller
	
	
	Name:               multinode-253131-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-253131-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=multinode-253131
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T18_21_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:21:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-253131-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:22:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 18:22:05 +0000   Thu, 25 Jul 2024 18:21:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 18:22:05 +0000   Thu, 25 Jul 2024 18:21:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 18:22:05 +0000   Thu, 25 Jul 2024 18:21:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 18:22:05 +0000   Thu, 25 Jul 2024 18:21:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    multinode-253131-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ee4045f58eb48c4aa38ef605c6033e3
	  System UUID:                6ee4045f-58eb-48c4-aa38-ef605c6033e3
	  Boot ID:                    c7b54ae9-5cca-4a0c-b9e7-a523e34cc176
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9c2k9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-zd9dg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m28s
	  kube-system                 kube-proxy-rhvxz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m22s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m28s (x2 over 7m28s)  kubelet     Node multinode-253131-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s (x2 over 7m28s)  kubelet     Node multinode-253131-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m28s (x2 over 7m28s)  kubelet     Node multinode-253131-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m28s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m8s                   kubelet     Node multinode-253131-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-253131-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-253131-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-253131-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-253131-m02 status is now: NodeReady
	
	
	Name:               multinode-253131-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-253131-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=multinode-253131
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T18_22_14_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:22:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-253131-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:22:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 18:22:33 +0000   Thu, 25 Jul 2024 18:22:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 18:22:33 +0000   Thu, 25 Jul 2024 18:22:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 18:22:33 +0000   Thu, 25 Jul 2024 18:22:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 18:22:33 +0000   Thu, 25 Jul 2024 18:22:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    multinode-253131-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2374ae77ceb046e59ca8f2ccb643694a
	  System UUID:                2374ae77-ceb0-46e5-9ca8-f2ccb643694a
	  Boot ID:                    48abe47d-811f-4e7f-bb30-32a17cd2ae91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4hhvf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m34s
	  kube-system                 kube-proxy-st44z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m35s (x2 over 6m35s)  kubelet          Node multinode-253131-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x2 over 6m35s)  kubelet          Node multinode-253131-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x2 over 6m35s)  kubelet          Node multinode-253131-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m14s                  kubelet          Node multinode-253131-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet          Node multinode-253131-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet          Node multinode-253131-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet          Node multinode-253131-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m26s                  kubelet          Node multinode-253131-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet          Node multinode-253131-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet          Node multinode-253131-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet          Node multinode-253131-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node multinode-253131-m03 event: Registered Node multinode-253131-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-253131-m03 status is now: NodeReady
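The three node descriptions above are plain describe output for the control plane and the two workers. A hedged sketch to regenerate them, assuming the kubectl context created for this profile is still present:

  # node conditions, capacity and recent events for all three nodes
  kubectl --context multinode-253131 describe nodes
  # quick readiness check without the full description
  kubectl --context multinode-253131 get nodes -o wide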
	
	
	==> dmesg <==
	[  +0.067914] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063994] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.208517] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.135063] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.253734] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[Jul25 18:14] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +3.850901] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +0.068145] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.016039] systemd-fstab-generator[1276]: Ignoring "noauto" option for root device
	[  +0.083892] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.127659] systemd-fstab-generator[1472]: Ignoring "noauto" option for root device
	[  +0.121732] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.012814] kauditd_printk_skb: 59 callbacks suppressed
	[Jul25 18:15] kauditd_printk_skb: 12 callbacks suppressed
	[Jul25 18:20] systemd-fstab-generator[2796]: Ignoring "noauto" option for root device
	[  +0.144900] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.155896] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.135666] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.262065] systemd-fstab-generator[2862]: Ignoring "noauto" option for root device
	[  +1.476617] systemd-fstab-generator[2960]: Ignoring "noauto" option for root device
	[  +2.254542] systemd-fstab-generator[3146]: Ignoring "noauto" option for root device
	[  +0.825036] kauditd_printk_skb: 149 callbacks suppressed
	[Jul25 18:21] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.110764] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[ +20.059547] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879] <==
	{"level":"info","ts":"2024-07-25T18:15:18.202255Z","caller":"traceutil/trace.go:171","msg":"trace[1132435649] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:521; }","duration":"202.262362ms","start":"2024-07-25T18:15:17.999966Z","end":"2024-07-25T18:15:18.202229Z","steps":["trace[1132435649] 'agreement among raft nodes before linearized reading'  (duration: 201.564303ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T18:15:18.201712Z","caller":"traceutil/trace.go:171","msg":"trace[1125777536] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"222.960569ms","start":"2024-07-25T18:15:17.978736Z","end":"2024-07-25T18:15:18.201697Z","steps":["trace[1125777536] 'process raft request'  (duration: 222.520388ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T18:15:18.455669Z","caller":"traceutil/trace.go:171","msg":"trace[36567743] linearizableReadLoop","detail":"{readStateIndex:546; appliedIndex:545; }","duration":"179.486757ms","start":"2024-07-25T18:15:18.276168Z","end":"2024-07-25T18:15:18.455655Z","steps":["trace[36567743] 'read index received'  (duration: 113.770988ms)","trace[36567743] 'applied index is now lower than readState.Index'  (duration: 65.714927ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-25T18:15:18.455834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.6521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-253131-m02\" ","response":"range_response_count:1 size:2953"}
	{"level":"info","ts":"2024-07-25T18:15:18.455924Z","caller":"traceutil/trace.go:171","msg":"trace[1693464086] range","detail":"{range_begin:/registry/minions/multinode-253131-m02; range_end:; response_count:1; response_revision:522; }","duration":"179.738515ms","start":"2024-07-25T18:15:18.276144Z","end":"2024-07-25T18:15:18.455883Z","steps":["trace[1693464086] 'agreement among raft nodes before linearized reading'  (duration: 179.57391ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T18:15:18.456132Z","caller":"traceutil/trace.go:171","msg":"trace[1945710325] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"248.038844ms","start":"2024-07-25T18:15:18.208083Z","end":"2024-07-25T18:15:18.456121Z","steps":["trace[1945710325] 'process raft request'  (duration: 181.957956ms)","trace[1945710325] 'compare'  (duration: 65.406256ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-25T18:16:02.000427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.344708ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7068277479603876475 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-253131-m03.17e5877348105532\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-253131-m03.17e5877348105532\" value_size:646 lease:7068277479603876095 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-25T18:16:02.000646Z","caller":"traceutil/trace.go:171","msg":"trace[691096272] linearizableReadLoop","detail":"{readStateIndex:640; appliedIndex:638; }","duration":"107.800102ms","start":"2024-07-25T18:16:01.892823Z","end":"2024-07-25T18:16:02.000623Z","steps":["trace[691096272] 'read index received'  (duration: 105.489754ms)","trace[691096272] 'applied index is now lower than readState.Index'  (duration: 2.309459ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T18:16:02.000655Z","caller":"traceutil/trace.go:171","msg":"trace[1269904372] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"246.627151ms","start":"2024-07-25T18:16:01.754012Z","end":"2024-07-25T18:16:02.000639Z","steps":["trace[1269904372] 'process raft request'  (duration: 55.020776ms)","trace[1269904372] 'compare'  (duration: 191.171658ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T18:16:02.000768Z","caller":"traceutil/trace.go:171","msg":"trace[939474893] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"168.100067ms","start":"2024-07-25T18:16:01.832662Z","end":"2024-07-25T18:16:02.000762Z","steps":["trace[939474893] 'process raft request'  (duration: 167.913184ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T18:16:02.001001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.173093ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-253131-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-25T18:16:02.001037Z","caller":"traceutil/trace.go:171","msg":"trace[595896803] range","detail":"{range_begin:/registry/minions/multinode-253131-m03; range_end:; response_count:1; response_revision:607; }","duration":"108.234708ms","start":"2024-07-25T18:16:01.892796Z","end":"2024-07-25T18:16:02.00103Z","steps":["trace[595896803] 'agreement among raft nodes before linearized reading'  (duration: 108.065704ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T18:16:10.268984Z","caller":"traceutil/trace.go:171","msg":"trace[52826067] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"212.53219ms","start":"2024-07-25T18:16:10.056342Z","end":"2024-07-25T18:16:10.268874Z","steps":["trace[52826067] 'process raft request'  (duration: 212.427547ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T18:16:10.62156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.934133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-253131-m03\" ","response":"range_response_count:1 size:3229"}
	{"level":"info","ts":"2024-07-25T18:16:10.621653Z","caller":"traceutil/trace.go:171","msg":"trace[1908371880] range","detail":"{range_begin:/registry/minions/multinode-253131-m03; range_end:; response_count:1; response_revision:650; }","duration":"116.072543ms","start":"2024-07-25T18:16:10.505565Z","end":"2024-07-25T18:16:10.621638Z","steps":["trace[1908371880] 'range keys from in-memory index tree'  (duration: 115.811209ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T18:19:16.100916Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-25T18:19:16.101039Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-253131","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"]}
	{"level":"warn","ts":"2024-07-25T18:19:16.101174Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:19:16.10128Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:19:16.182984Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:19:16.183026Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-25T18:19:16.183118Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"731f5c40d4af6217","current-leader-member-id":"731f5c40d4af6217"}
	{"level":"info","ts":"2024-07-25T18:19:16.188328Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-07-25T18:19:16.188619Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-07-25T18:19:16.188677Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-253131","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"]}
	
	
	==> etcd [95f1dc59987d9bd7f9758959b9c1ccebfb2c65f2c3ab1076e434fec0844f2942] <==
	{"level":"info","ts":"2024-07-25T18:20:53.264547Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T18:20:53.267204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 switched to configuration voters=(8295450472155669015)"}
	{"level":"info","ts":"2024-07-25T18:20:53.267288Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ad335f297da439ca","local-member-id":"731f5c40d4af6217","added-peer-id":"731f5c40d4af6217","added-peer-peer-urls":["https://192.168.39.54:2380"]}
	{"level":"info","ts":"2024-07-25T18:20:53.267455Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ad335f297da439ca","local-member-id":"731f5c40d4af6217","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:20:53.267497Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:20:53.278649Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-25T18:20:53.27894Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"731f5c40d4af6217","initial-advertise-peer-urls":["https://192.168.39.54:2380"],"listen-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:20:53.278985Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:20:53.290371Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-07-25T18:20:53.290402Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-07-25T18:20:54.402958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-25T18:20:54.403015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-25T18:20:54.403066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 received MsgPreVoteResp from 731f5c40d4af6217 at term 2"}
	{"level":"info","ts":"2024-07-25T18:20:54.40308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became candidate at term 3"}
	{"level":"info","ts":"2024-07-25T18:20:54.403086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 received MsgVoteResp from 731f5c40d4af6217 at term 3"}
	{"level":"info","ts":"2024-07-25T18:20:54.403094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became leader at term 3"}
	{"level":"info","ts":"2024-07-25T18:20:54.403117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 731f5c40d4af6217 elected leader 731f5c40d4af6217 at term 3"}
	{"level":"info","ts":"2024-07-25T18:20:54.408274Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"731f5c40d4af6217","local-member-attributes":"{Name:multinode-253131 ClientURLs:[https://192.168.39.54:2379]}","request-path":"/0/members/731f5c40d4af6217/attributes","cluster-id":"ad335f297da439ca","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:20:54.408424Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:20:54.411708Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.54:2379"}
	{"level":"info","ts":"2024-07-25T18:20:54.41272Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:20:54.41458Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:20:54.416214Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:20:54.416242Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:21:39.067852Z","caller":"traceutil/trace.go:171","msg":"trace[450221974] transaction","detail":"{read_only:false; response_revision:1062; number_of_response:1; }","duration":"163.238648ms","start":"2024-07-25T18:21:38.904568Z","end":"2024-07-25T18:21:39.067807Z","steps":["trace[450221974] 'process raft request'  (duration: 162.699262ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:22:36 up 9 min,  0 users,  load average: 0.48, 0.34, 0.16
	Linux multinode-253131 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [061828a7da84f56a395fbd33965b2f380017500e948065f9b7ae570ad2cc6e59] <==
	I0725 18:21:47.740337       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	I0725 18:21:57.739639       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:21:57.739762       1 main.go:299] handling current node
	I0725 18:21:57.739791       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:21:57.739809       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:21:57.740066       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:21:57.740112       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	I0725 18:22:07.741981       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:22:07.742102       1 main.go:299] handling current node
	I0725 18:22:07.742131       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:22:07.742184       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:22:07.742364       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:22:07.742389       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	I0725 18:22:17.742307       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:22:17.742359       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:22:17.742498       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:22:17.742514       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.2.0/24] 
	I0725 18:22:17.742580       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:22:17.742601       1 main.go:299] handling current node
	I0725 18:22:27.739638       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:22:27.739715       1 main.go:299] handling current node
	I0725 18:22:27.739743       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:22:27.739749       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:22:27.740002       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:22:27.740025       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e] <==
	I0725 18:18:30.948677       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	I0725 18:18:40.956249       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:18:40.956360       1 main.go:299] handling current node
	I0725 18:18:40.956395       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:18:40.956417       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:18:40.956663       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:18:40.956700       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	I0725 18:18:50.957205       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:18:50.957263       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	I0725 18:18:50.957414       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:18:50.957433       1 main.go:299] handling current node
	I0725 18:18:50.957452       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:18:50.957456       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:19:00.957186       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:19:00.957351       1 main.go:299] handling current node
	I0725 18:19:00.957391       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:19:00.957415       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:19:00.957597       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:19:00.957674       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	I0725 18:19:10.956421       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:19:10.956545       1 main.go:299] handling current node
	I0725 18:19:10.956586       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:19:10.956609       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:19:10.956834       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:19:10.956876       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [268ab6f9cddbcf651bef9126162a4114e8a3fc2ba7fb242dd8466a171e38a3a3] <==
	I0725 18:20:55.694098       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0725 18:20:55.694198       1 policy_source.go:224] refreshing policies
	I0725 18:20:55.701755       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 18:20:55.720087       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0725 18:20:55.720128       1 aggregator.go:165] initial CRD sync complete...
	I0725 18:20:55.720152       1 autoregister_controller.go:141] Starting autoregister controller
	I0725 18:20:55.720158       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0725 18:20:55.720163       1 cache.go:39] Caches are synced for autoregister controller
	I0725 18:20:55.779568       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0725 18:20:55.786099       1 shared_informer.go:320] Caches are synced for configmaps
	I0725 18:20:55.786979       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 18:20:55.787585       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0725 18:20:55.788048       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 18:20:55.788451       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0725 18:20:55.788491       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0725 18:20:55.792991       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0725 18:20:55.793720       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0725 18:20:56.598464       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 18:20:57.632361       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0725 18:20:57.796247       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0725 18:20:57.816507       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0725 18:20:57.875324       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 18:20:57.882004       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 18:21:08.188531       1 controller.go:615] quota admission added evaluator for: endpoints
	I0725 18:21:08.211681       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601] <==
	W0725 18:19:16.124249       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124275       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124299       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124329       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124386       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124419       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124445       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124475       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124506       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124532       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124559       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124590       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124622       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124646       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124670       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124698       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124724       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124748       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124773       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124826       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.128798       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.128986       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.129255       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0725 18:19:16.130759       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0725 18:19:16.132521       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [43b7d2bfc585bd62a1000712c61c55365116c5d2c7ff8e1119e92d8d00f03d90] <==
	I0725 18:21:08.579999       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 18:21:08.629536       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 18:21:08.629560       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0725 18:21:30.440367       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.594852ms"
	I0725 18:21:30.464538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.122368ms"
	I0725 18:21:30.464613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.886µs"
	I0725 18:21:34.664106       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-253131-m02\" does not exist"
	I0725 18:21:34.675078       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-253131-m02" podCIDRs=["10.244.1.0/24"]
	I0725 18:21:36.548931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="124.606µs"
	I0725 18:21:36.594389       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.503µs"
	I0725 18:21:36.605002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.415µs"
	I0725 18:21:36.607490       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.502µs"
	I0725 18:21:36.612412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.327µs"
	I0725 18:21:36.613792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.346µs"
	I0725 18:21:39.071707       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.427µs"
	I0725 18:21:54.384302       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:21:54.403008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.072µs"
	I0725 18:21:54.415804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.1µs"
	I0725 18:21:58.032222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.009519ms"
	I0725 18:21:58.033460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="873.427µs"
	I0725 18:22:12.803792       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:22:13.835221       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:22:13.836225       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-253131-m03\" does not exist"
	I0725 18:22:13.843063       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-253131-m03" podCIDRs=["10.244.2.0/24"]
	I0725 18:22:33.168310       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	
	
	==> kube-controller-manager [79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7] <==
	I0725 18:15:08.781329       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-253131-m02\" does not exist"
	I0725 18:15:08.822203       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-253131-m02" podCIDRs=["10.244.1.0/24"]
	I0725 18:15:09.654797       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-253131-m02"
	I0725 18:15:28.691930       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:15:30.839196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.863807ms"
	I0725 18:15:30.864828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.444272ms"
	I0725 18:15:30.878766       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.69927ms"
	I0725 18:15:30.878876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.7µs"
	I0725 18:15:35.034876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.518022ms"
	I0725 18:15:35.035278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.371µs"
	I0725 18:15:35.221477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.110118ms"
	I0725 18:15:35.222249       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.735µs"
	I0725 18:16:02.006186       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-253131-m03\" does not exist"
	I0725 18:16:02.006387       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:16:02.074144       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-253131-m03" podCIDRs=["10.244.2.0/24"]
	I0725 18:16:04.678154       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-253131-m03"
	I0725 18:16:22.489416       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:16:50.296658       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:16:51.329821       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:16:51.330465       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-253131-m03\" does not exist"
	I0725 18:16:51.342917       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-253131-m03" podCIDRs=["10.244.3.0/24"]
	I0725 18:17:10.723060       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:17:54.730318       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m03"
	I0725 18:17:54.792496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.287274ms"
	I0725 18:17:54.792576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.962µs"
	
	
	==> kube-proxy [393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3] <==
	I0725 18:14:26.682140       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:14:26.697155       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	I0725 18:14:26.776390       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:14:26.776450       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:14:26.776475       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:14:26.782758       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:14:26.783208       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:14:26.783532       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:14:26.786764       1 config.go:192] "Starting service config controller"
	I0725 18:14:26.787015       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:14:26.787573       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:14:26.787625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:14:26.791200       1 config.go:319] "Starting node config controller"
	I0725 18:14:26.791242       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:14:26.887986       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:14:26.887997       1 shared_informer.go:320] Caches are synced for service config
	I0725 18:14:26.891475       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9c30dbb647c586f95a4c05eee6aa5b984946e5a5ca47340f7e75e10d48037ba2] <==
	I0725 18:20:56.889837       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:20:56.902455       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	I0725 18:20:56.962942       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:20:56.962979       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:20:56.962995       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:20:56.974638       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:20:56.976018       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:20:56.976873       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:20:56.978871       1 config.go:192] "Starting service config controller"
	I0725 18:20:56.979664       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:20:56.979744       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:20:56.979763       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:20:56.980238       1 config.go:319] "Starting node config controller"
	I0725 18:20:56.980275       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:20:57.080823       1 shared_informer.go:320] Caches are synced for node config
	I0725 18:20:57.080969       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:20:57.080978       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19] <==
	E0725 18:14:09.427813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:09.427869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 18:14:09.427932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:09.427941       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 18:14:09.427949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:10.279873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 18:14:10.279950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:10.324670       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 18:14:10.324745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 18:14:10.342879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 18:14:10.342984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 18:14:10.384120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 18:14:10.384214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 18:14:10.447105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 18:14:10.447147       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:10.465919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 18:14:10.467353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 18:14:10.488393       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 18:14:10.488436       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:10.594472       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 18:14:10.594547       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:14:10.723094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 18:14:10.723211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0725 18:14:13.318554       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0725 18:19:16.112063       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [33f021a008e5e0fd1e671a5223b290e37dc6b833fcc88ecae62a13f7d7c52803] <==
	W0725 18:20:55.687337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 18:20:55.687363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 18:20:55.687406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 18:20:55.687429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 18:20:55.687564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 18:20:55.687589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0725 18:20:55.693157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 18:20:55.693192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0725 18:20:55.693280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 18:20:55.693340       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 18:20:55.693441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 18:20:55.693465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 18:20:55.693495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 18:20:55.693571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 18:20:55.693653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 18:20:55.693677       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 18:20:55.693720       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 18:20:55.693742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 18:20:55.693584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 18:20:55.694672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 18:20:55.693532       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 18:20:55.694732       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 18:20:55.704161       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 18:20:55.704194       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0725 18:20:56.769758       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 18:20:53 multinode-253131 kubelet[3153]: I0725 18:20:53.610336    3153 kubelet_node_status.go:73] "Attempting to register node" node="multinode-253131"
	Jul 25 18:20:55 multinode-253131 kubelet[3153]: I0725 18:20:55.721471    3153 kubelet_node_status.go:112] "Node was previously registered" node="multinode-253131"
	Jul 25 18:20:55 multinode-253131 kubelet[3153]: I0725 18:20:55.721575    3153 kubelet_node_status.go:76] "Successfully registered node" node="multinode-253131"
	Jul 25 18:20:55 multinode-253131 kubelet[3153]: I0725 18:20:55.722806    3153 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 25 18:20:55 multinode-253131 kubelet[3153]: I0725 18:20:55.723684    3153 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.094967    3153 apiserver.go:52] "Watching apiserver"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.106010    3153 topology_manager.go:215] "Topology Admit Handler" podUID="2d1b1ec9-65be-45a4-bc80-f2f13f2349bc" podNamespace="kube-system" podName="kindnet-hvwf2"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.106166    3153 topology_manager.go:215] "Topology Admit Handler" podUID="5bd539cc-9683-496d-9aea-545539fcf647" podNamespace="kube-system" podName="kube-proxy-zgrbq"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.106241    3153 topology_manager.go:215] "Topology Admit Handler" podUID="76b677de-805b-44fc-930b-ee22b62f899d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6lrr5"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.106285    3153 topology_manager.go:215] "Topology Admit Handler" podUID="388889a4-653d-4351-b19e-454285b56dd5" podNamespace="kube-system" podName="storage-provisioner"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.106334    3153 topology_manager.go:215] "Topology Admit Handler" podUID="867fbc0d-ad43-47e9-9bb1-a83711108175" podNamespace="default" podName="busybox-fc5497c4f-gfbkg"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.199671    3153 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.239055    3153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d1b1ec9-65be-45a4-bc80-f2f13f2349bc-lib-modules\") pod \"kindnet-hvwf2\" (UID: \"2d1b1ec9-65be-45a4-bc80-f2f13f2349bc\") " pod="kube-system/kindnet-hvwf2"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.239186    3153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2d1b1ec9-65be-45a4-bc80-f2f13f2349bc-cni-cfg\") pod \"kindnet-hvwf2\" (UID: \"2d1b1ec9-65be-45a4-bc80-f2f13f2349bc\") " pod="kube-system/kindnet-hvwf2"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.239237    3153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d1b1ec9-65be-45a4-bc80-f2f13f2349bc-xtables-lock\") pod \"kindnet-hvwf2\" (UID: \"2d1b1ec9-65be-45a4-bc80-f2f13f2349bc\") " pod="kube-system/kindnet-hvwf2"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.239286    3153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bd539cc-9683-496d-9aea-545539fcf647-xtables-lock\") pod \"kube-proxy-zgrbq\" (UID: \"5bd539cc-9683-496d-9aea-545539fcf647\") " pod="kube-system/kube-proxy-zgrbq"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.239363    3153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/388889a4-653d-4351-b19e-454285b56dd5-tmp\") pod \"storage-provisioner\" (UID: \"388889a4-653d-4351-b19e-454285b56dd5\") " pod="kube-system/storage-provisioner"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.239412    3153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bd539cc-9683-496d-9aea-545539fcf647-lib-modules\") pod \"kube-proxy-zgrbq\" (UID: \"5bd539cc-9683-496d-9aea-545539fcf647\") " pod="kube-system/kube-proxy-zgrbq"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.407285    3153 scope.go:117] "RemoveContainer" containerID="92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf"
	Jul 25 18:21:02 multinode-253131 kubelet[3153]: I0725 18:21:02.537337    3153 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 25 18:21:52 multinode-253131 kubelet[3153]: E0725 18:21:52.203130    3153 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 18:21:52 multinode-253131 kubelet[3153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 18:21:52 multinode-253131 kubelet[3153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 18:21:52 multinode-253131 kubelet[3153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 18:21:52 multinode-253131 kubelet[3153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0725 18:22:35.584556   43363 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19326-5877/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-253131 -n multinode-253131
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-253131 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (324.06s)

x
+
TestMultiNode/serial/StopMultiNode (141.12s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 stop
E0725 18:24:12.057423   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-253131 stop: exit status 82 (2m0.459743328s)

-- stdout --
	* Stopping node "multinode-253131-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-253131 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-253131 status: exit status 3 (18.653408047s)

-- stdout --
	multinode-253131
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-253131-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0725 18:24:58.784656   44036 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.179:22: connect: no route to host
	E0725 18:24:58.784691   44036 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.179:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-253131 status" : exit status 3
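Taken together, the status output and stderr explain the exit status 3: the control-plane node is healthy, but the probe for multinode-253131-m02 cannot open an SSH session to 192.168.39.179 ("no route to host"), so that node is reported as host: Error with a nonexistent kubelet. A hedged sketch for checking the backing kvm2 VM directly; the libvirt domain name is assumed to follow the node name, as the primary node's domain does elsewhere in this log:

	# Per-node status from minikube (exit status 3 signals a host/node error).
	out/minikube-linux-amd64 -p multinode-253131 status || echo "status exited with: $?"

	# Cross-check with libvirt, since each kvm2 node is backed by a libvirt domain.
	sudo virsh list --all                       # list all domains and their states
	sudo virsh domstate multinode-253131-m02    # assumed domain name for the m02 node

	# If the domain is running but SSH gets "no route to host", probe the recorded node IP.
	ping -c 2 192.168.39.179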
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-253131 -n multinode-253131
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-253131 logs -n 25: (1.381967695s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m02:/home/docker/cp-test.txt                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131:/home/docker/cp-test_multinode-253131-m02_multinode-253131.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n multinode-253131 sudo cat                                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /home/docker/cp-test_multinode-253131-m02_multinode-253131.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m02:/home/docker/cp-test.txt                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03:/home/docker/cp-test_multinode-253131-m02_multinode-253131-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n multinode-253131-m03 sudo cat                                   | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /home/docker/cp-test_multinode-253131-m02_multinode-253131-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp testdata/cp-test.txt                                                | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m03:/home/docker/cp-test.txt                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1140125035/001/cp-test_multinode-253131-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m03:/home/docker/cp-test.txt                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131:/home/docker/cp-test_multinode-253131-m03_multinode-253131.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n multinode-253131 sudo cat                                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /home/docker/cp-test_multinode-253131-m03_multinode-253131.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m03:/home/docker/cp-test.txt                       | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m02:/home/docker/cp-test_multinode-253131-m03_multinode-253131-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n multinode-253131-m02 sudo cat                                   | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /home/docker/cp-test_multinode-253131-m03_multinode-253131-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-253131 node stop m03                                                          | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	| node    | multinode-253131 node start                                                             | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:17 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-253131                                                                | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:17 UTC |                     |
	| stop    | -p multinode-253131                                                                     | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:17 UTC |                     |
	| start   | -p multinode-253131                                                                     | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:19 UTC | 25 Jul 24 18:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-253131                                                                | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:22 UTC |                     |
	| node    | multinode-253131 node delete                                                            | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:22 UTC | 25 Jul 24 18:22 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-253131 stop                                                                   | multinode-253131 | jenkins | v1.33.1 | 25 Jul 24 18:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:19:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:19:15.086550   42258 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:19:15.086824   42258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:19:15.086834   42258 out.go:304] Setting ErrFile to fd 2...
	I0725 18:19:15.086839   42258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:19:15.086983   42258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:19:15.087462   42258 out.go:298] Setting JSON to false
	I0725 18:19:15.088390   42258 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3699,"bootTime":1721927856,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:19:15.088444   42258 start.go:139] virtualization: kvm guest
	I0725 18:19:15.090373   42258 out.go:177] * [multinode-253131] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:19:15.091748   42258 notify.go:220] Checking for updates...
	I0725 18:19:15.091752   42258 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:19:15.092976   42258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:19:15.094159   42258 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:19:15.095218   42258 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:19:15.096280   42258 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:19:15.097455   42258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:19:15.098900   42258 config.go:182] Loaded profile config "multinode-253131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:19:15.099003   42258 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:19:15.099436   42258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:19:15.099485   42258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:19:15.114394   42258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45001
	I0725 18:19:15.114849   42258 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:19:15.115509   42258 main.go:141] libmachine: Using API Version  1
	I0725 18:19:15.115546   42258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:19:15.115874   42258 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:19:15.116075   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:19:15.152069   42258 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:19:15.153232   42258 start.go:297] selected driver: kvm2
	I0725 18:19:15.153246   42258 start.go:901] validating driver "kvm2" against &{Name:multinode-253131 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-253131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:19:15.153392   42258 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:19:15.153724   42258 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:19:15.153793   42258 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:19:15.168318   42258 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:19:15.168955   42258 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:19:15.169012   42258 cni.go:84] Creating CNI manager for ""
	I0725 18:19:15.169023   42258 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0725 18:19:15.169124   42258 start.go:340] cluster config:
	{Name:multinode-253131 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-253131 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:19:15.169319   42258 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:19:15.170774   42258 out.go:177] * Starting "multinode-253131" primary control-plane node in "multinode-253131" cluster
	I0725 18:19:15.171798   42258 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:19:15.171828   42258 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 18:19:15.171835   42258 cache.go:56] Caching tarball of preloaded images
	I0725 18:19:15.171921   42258 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:19:15.171932   42258 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 18:19:15.172039   42258 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/config.json ...
	I0725 18:19:15.172219   42258 start.go:360] acquireMachinesLock for multinode-253131: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:19:15.172262   42258 start.go:364] duration metric: took 26.613µs to acquireMachinesLock for "multinode-253131"
	I0725 18:19:15.172275   42258 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:19:15.172281   42258 fix.go:54] fixHost starting: 
	I0725 18:19:15.172642   42258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:19:15.172674   42258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:19:15.187004   42258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41817
	I0725 18:19:15.187488   42258 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:19:15.188050   42258 main.go:141] libmachine: Using API Version  1
	I0725 18:19:15.188073   42258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:19:15.188455   42258 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:19:15.188666   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:19:15.188864   42258 main.go:141] libmachine: (multinode-253131) Calling .GetState
	I0725 18:19:15.190705   42258 fix.go:112] recreateIfNeeded on multinode-253131: state=Running err=<nil>
	W0725 18:19:15.190743   42258 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:19:15.192604   42258 out.go:177] * Updating the running kvm2 "multinode-253131" VM ...
	I0725 18:19:15.193868   42258 machine.go:94] provisionDockerMachine start ...
	I0725 18:19:15.193896   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:19:15.194104   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.196825   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.197356   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.197382   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.197451   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:19:15.197618   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.197790   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.197936   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:19:15.198135   42258 main.go:141] libmachine: Using SSH client type: native
	I0725 18:19:15.198364   42258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0725 18:19:15.198379   42258 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:19:15.313406   42258 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-253131
	
	I0725 18:19:15.313438   42258 main.go:141] libmachine: (multinode-253131) Calling .GetMachineName
	I0725 18:19:15.313722   42258 buildroot.go:166] provisioning hostname "multinode-253131"
	I0725 18:19:15.313764   42258 main.go:141] libmachine: (multinode-253131) Calling .GetMachineName
	I0725 18:19:15.313990   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.316786   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.317196   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.317220   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.317366   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:19:15.317554   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.317722   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.317882   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:19:15.318040   42258 main.go:141] libmachine: Using SSH client type: native
	I0725 18:19:15.318225   42258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0725 18:19:15.318242   42258 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-253131 && echo "multinode-253131" | sudo tee /etc/hostname
	I0725 18:19:15.438825   42258 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-253131
	
	I0725 18:19:15.438858   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.441856   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.442269   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.442298   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.442484   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:19:15.442693   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.442862   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.443010   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:19:15.443161   42258 main.go:141] libmachine: Using SSH client type: native
	I0725 18:19:15.443320   42258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0725 18:19:15.443336   42258 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-253131' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-253131/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-253131' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:19:15.553442   42258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:19:15.553472   42258 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:19:15.553496   42258 buildroot.go:174] setting up certificates
	I0725 18:19:15.553504   42258 provision.go:84] configureAuth start
	I0725 18:19:15.553512   42258 main.go:141] libmachine: (multinode-253131) Calling .GetMachineName
	I0725 18:19:15.553819   42258 main.go:141] libmachine: (multinode-253131) Calling .GetIP
	I0725 18:19:15.556453   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.556907   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.556949   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.557104   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.559407   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.559746   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.559778   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.559953   42258 provision.go:143] copyHostCerts
	I0725 18:19:15.559979   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:19:15.560010   42258 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:19:15.560021   42258 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:19:15.560102   42258 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:19:15.560212   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:19:15.560235   42258 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:19:15.560244   42258 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:19:15.560284   42258 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:19:15.560364   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:19:15.560389   42258 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:19:15.560398   42258 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:19:15.560430   42258 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:19:15.560497   42258 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.multinode-253131 san=[127.0.0.1 192.168.39.54 localhost minikube multinode-253131]
	I0725 18:19:15.819885   42258 provision.go:177] copyRemoteCerts
	I0725 18:19:15.819947   42258 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:19:15.819969   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.822753   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.823062   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.823084   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.823246   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:19:15.823444   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.823622   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:19:15.823836   42258 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/multinode-253131/id_rsa Username:docker}
	I0725 18:19:15.907092   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 18:19:15.907176   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:19:15.930803   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 18:19:15.930862   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0725 18:19:15.955192   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 18:19:15.955252   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:19:15.978798   42258 provision.go:87] duration metric: took 425.282174ms to configureAuth
	I0725 18:19:15.978831   42258 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:19:15.979099   42258 config.go:182] Loaded profile config "multinode-253131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:19:15.979161   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:19:15.981798   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.982245   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:19:15.982272   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:19:15.982446   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:19:15.982668   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.982831   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:19:15.982973   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:19:15.983163   42258 main.go:141] libmachine: Using SSH client type: native
	I0725 18:19:15.983364   42258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0725 18:19:15.983384   42258 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:20:46.827634   42258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:20:46.827663   42258 machine.go:97] duration metric: took 1m31.633778148s to provisionDockerMachine
	I0725 18:20:46.827679   42258 start.go:293] postStartSetup for "multinode-253131" (driver="kvm2")
	I0725 18:20:46.827690   42258 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:20:46.827705   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:20:46.827984   42258 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:20:46.828007   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:20:46.831114   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:46.831514   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:46.831533   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:46.831688   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:20:46.831908   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:20:46.832090   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:20:46.832261   42258 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/multinode-253131/id_rsa Username:docker}
	I0725 18:20:46.919844   42258 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:20:46.923850   42258 command_runner.go:130] > NAME=Buildroot
	I0725 18:20:46.923874   42258 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0725 18:20:46.923881   42258 command_runner.go:130] > ID=buildroot
	I0725 18:20:46.923888   42258 command_runner.go:130] > VERSION_ID=2023.02.9
	I0725 18:20:46.923902   42258 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0725 18:20:46.923944   42258 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:20:46.923959   42258 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:20:46.924021   42258 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:20:46.924140   42258 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:20:46.924155   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 18:20:46.924252   42258 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:20:46.933391   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:20:46.955632   42258 start.go:296] duration metric: took 127.938817ms for postStartSetup
	I0725 18:20:46.955676   42258 fix.go:56] duration metric: took 1m31.783393669s for fixHost
	I0725 18:20:46.955709   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:20:46.958350   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:46.958763   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:46.958787   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:46.958994   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:20:46.959235   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:20:46.959468   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:20:46.959636   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:20:46.959827   42258 main.go:141] libmachine: Using SSH client type: native
	I0725 18:20:46.960003   42258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0725 18:20:46.960025   42258 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:20:47.068825   42258 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721931647.042399081
	
	I0725 18:20:47.068855   42258 fix.go:216] guest clock: 1721931647.042399081
	I0725 18:20:47.068885   42258 fix.go:229] Guest: 2024-07-25 18:20:47.042399081 +0000 UTC Remote: 2024-07-25 18:20:46.955680646 +0000 UTC m=+91.903510165 (delta=86.718435ms)
	I0725 18:20:47.068949   42258 fix.go:200] guest clock delta is within tolerance: 86.718435ms
	I0725 18:20:47.068961   42258 start.go:83] releasing machines lock for "multinode-253131", held for 1m31.89668886s
	I0725 18:20:47.068991   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:20:47.069258   42258 main.go:141] libmachine: (multinode-253131) Calling .GetIP
	I0725 18:20:47.072177   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.072797   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:47.072829   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.073080   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:20:47.073609   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:20:47.073798   42258 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:20:47.073871   42258 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:20:47.073912   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:20:47.074005   42258 ssh_runner.go:195] Run: cat /version.json
	I0725 18:20:47.074019   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:20:47.076802   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.076863   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.077186   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:47.077210   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.077236   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:47.077252   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:47.077348   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:20:47.077539   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:20:47.077668   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:20:47.077833   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:20:47.077845   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:20:47.077974   42258 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:20:47.078031   42258 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/multinode-253131/id_rsa Username:docker}
	I0725 18:20:47.078239   42258 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/multinode-253131/id_rsa Username:docker}
	I0725 18:20:47.189230   42258 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0725 18:20:47.189279   42258 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0725 18:20:47.189389   42258 ssh_runner.go:195] Run: systemctl --version
	I0725 18:20:47.195023   42258 command_runner.go:130] > systemd 252 (252)
	I0725 18:20:47.195054   42258 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0725 18:20:47.195243   42258 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:20:47.354449   42258 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 18:20:47.362316   42258 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0725 18:20:47.362447   42258 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:20:47.362509   42258 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:20:47.371751   42258 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0725 18:20:47.371773   42258 start.go:495] detecting cgroup driver to use...
	I0725 18:20:47.371838   42258 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:20:47.387333   42258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:20:47.401637   42258 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:20:47.401702   42258 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:20:47.415261   42258 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:20:47.428015   42258 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:20:47.564805   42258 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:20:47.698067   42258 docker.go:233] disabling docker service ...
	I0725 18:20:47.698143   42258 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:20:47.713851   42258 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:20:47.726757   42258 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:20:47.861369   42258 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:20:47.993722   42258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:20:48.006720   42258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:20:48.025342   42258 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0725 18:20:48.025998   42258 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:20:48.026066   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.036062   42258 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:20:48.036124   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.046591   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.056392   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.066083   42258 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:20:48.075927   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.086906   42258 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.097952   42258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:20:48.107491   42258 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:20:48.116015   42258 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0725 18:20:48.116086   42258 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:20:48.124719   42258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:20:48.258031   42258 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:20:49.296213   42258 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.038143083s)
	I0725 18:20:49.296245   42258 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:20:49.296341   42258 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:20:49.301442   42258 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0725 18:20:49.301461   42258 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0725 18:20:49.301467   42258 command_runner.go:130] > Device: 0,22	Inode: 1332        Links: 1
	I0725 18:20:49.301475   42258 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0725 18:20:49.301483   42258 command_runner.go:130] > Access: 2024-07-25 18:20:49.162836071 +0000
	I0725 18:20:49.301492   42258 command_runner.go:130] > Modify: 2024-07-25 18:20:49.162836071 +0000
	I0725 18:20:49.301500   42258 command_runner.go:130] > Change: 2024-07-25 18:20:49.162836071 +0000
	I0725 18:20:49.301505   42258 command_runner.go:130] >  Birth: -
	I0725 18:20:49.301523   42258 start.go:563] Will wait 60s for crictl version
	I0725 18:20:49.301573   42258 ssh_runner.go:195] Run: which crictl
	I0725 18:20:49.305125   42258 command_runner.go:130] > /usr/bin/crictl
	I0725 18:20:49.305276   42258 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:20:49.343228   42258 command_runner.go:130] > Version:  0.1.0
	I0725 18:20:49.343249   42258 command_runner.go:130] > RuntimeName:  cri-o
	I0725 18:20:49.343256   42258 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0725 18:20:49.343263   42258 command_runner.go:130] > RuntimeApiVersion:  v1
	I0725 18:20:49.343363   42258 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:20:49.343465   42258 ssh_runner.go:195] Run: crio --version
	I0725 18:20:49.369228   42258 command_runner.go:130] > crio version 1.29.1
	I0725 18:20:49.369254   42258 command_runner.go:130] > Version:        1.29.1
	I0725 18:20:49.369269   42258 command_runner.go:130] > GitCommit:      unknown
	I0725 18:20:49.369276   42258 command_runner.go:130] > GitCommitDate:  unknown
	I0725 18:20:49.369283   42258 command_runner.go:130] > GitTreeState:   clean
	I0725 18:20:49.369291   42258 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0725 18:20:49.369298   42258 command_runner.go:130] > GoVersion:      go1.21.6
	I0725 18:20:49.369306   42258 command_runner.go:130] > Compiler:       gc
	I0725 18:20:49.369312   42258 command_runner.go:130] > Platform:       linux/amd64
	I0725 18:20:49.369320   42258 command_runner.go:130] > Linkmode:       dynamic
	I0725 18:20:49.369328   42258 command_runner.go:130] > BuildTags:      
	I0725 18:20:49.369335   42258 command_runner.go:130] >   containers_image_ostree_stub
	I0725 18:20:49.369345   42258 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0725 18:20:49.369351   42258 command_runner.go:130] >   btrfs_noversion
	I0725 18:20:49.369359   42258 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0725 18:20:49.369369   42258 command_runner.go:130] >   libdm_no_deferred_remove
	I0725 18:20:49.369378   42258 command_runner.go:130] >   seccomp
	I0725 18:20:49.369385   42258 command_runner.go:130] > LDFlags:          unknown
	I0725 18:20:49.369395   42258 command_runner.go:130] > SeccompEnabled:   true
	I0725 18:20:49.369402   42258 command_runner.go:130] > AppArmorEnabled:  false
	I0725 18:20:49.370607   42258 ssh_runner.go:195] Run: crio --version
	I0725 18:20:49.396374   42258 command_runner.go:130] > crio version 1.29.1
	I0725 18:20:49.396394   42258 command_runner.go:130] > Version:        1.29.1
	I0725 18:20:49.396401   42258 command_runner.go:130] > GitCommit:      unknown
	I0725 18:20:49.396408   42258 command_runner.go:130] > GitCommitDate:  unknown
	I0725 18:20:49.396414   42258 command_runner.go:130] > GitTreeState:   clean
	I0725 18:20:49.396422   42258 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0725 18:20:49.396428   42258 command_runner.go:130] > GoVersion:      go1.21.6
	I0725 18:20:49.396434   42258 command_runner.go:130] > Compiler:       gc
	I0725 18:20:49.396441   42258 command_runner.go:130] > Platform:       linux/amd64
	I0725 18:20:49.396448   42258 command_runner.go:130] > Linkmode:       dynamic
	I0725 18:20:49.396464   42258 command_runner.go:130] > BuildTags:      
	I0725 18:20:49.396473   42258 command_runner.go:130] >   containers_image_ostree_stub
	I0725 18:20:49.396480   42258 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0725 18:20:49.396486   42258 command_runner.go:130] >   btrfs_noversion
	I0725 18:20:49.396491   42258 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0725 18:20:49.396498   42258 command_runner.go:130] >   libdm_no_deferred_remove
	I0725 18:20:49.396501   42258 command_runner.go:130] >   seccomp
	I0725 18:20:49.396509   42258 command_runner.go:130] > LDFlags:          unknown
	I0725 18:20:49.396513   42258 command_runner.go:130] > SeccompEnabled:   true
	I0725 18:20:49.396521   42258 command_runner.go:130] > AppArmorEnabled:  false
	I0725 18:20:49.398594   42258 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:20:49.400370   42258 main.go:141] libmachine: (multinode-253131) Calling .GetIP
	I0725 18:20:49.403208   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:49.403615   42258 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:20:49.403642   42258 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:20:49.403861   42258 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:20:49.407841   42258 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0725 18:20:49.407941   42258 kubeadm.go:883] updating cluster {Name:multinode-253131 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-253131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:20:49.408087   42258 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:20:49.408139   42258 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:20:49.449920   42258 command_runner.go:130] > {
	I0725 18:20:49.449942   42258 command_runner.go:130] >   "images": [
	I0725 18:20:49.449946   42258 command_runner.go:130] >     {
	I0725 18:20:49.449954   42258 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0725 18:20:49.449960   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.449965   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0725 18:20:49.449971   42258 command_runner.go:130] >       ],
	I0725 18:20:49.449977   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450004   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0725 18:20:49.450019   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0725 18:20:49.450024   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450033   42258 command_runner.go:130] >       "size": "87165492",
	I0725 18:20:49.450040   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450046   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450056   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450065   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450072   42258 command_runner.go:130] >     },
	I0725 18:20:49.450080   42258 command_runner.go:130] >     {
	I0725 18:20:49.450090   42258 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0725 18:20:49.450100   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450109   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0725 18:20:49.450116   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450124   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450137   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0725 18:20:49.450146   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0725 18:20:49.450150   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450154   42258 command_runner.go:130] >       "size": "87174707",
	I0725 18:20:49.450161   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450173   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450179   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450183   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450189   42258 command_runner.go:130] >     },
	I0725 18:20:49.450192   42258 command_runner.go:130] >     {
	I0725 18:20:49.450200   42258 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0725 18:20:49.450204   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450209   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0725 18:20:49.450217   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450221   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450227   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0725 18:20:49.450236   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0725 18:20:49.450239   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450244   42258 command_runner.go:130] >       "size": "1363676",
	I0725 18:20:49.450250   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450255   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450261   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450265   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450272   42258 command_runner.go:130] >     },
	I0725 18:20:49.450277   42258 command_runner.go:130] >     {
	I0725 18:20:49.450288   42258 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0725 18:20:49.450294   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450299   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0725 18:20:49.450305   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450308   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450316   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0725 18:20:49.450326   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0725 18:20:49.450331   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450334   42258 command_runner.go:130] >       "size": "31470524",
	I0725 18:20:49.450338   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450342   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450346   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450351   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450354   42258 command_runner.go:130] >     },
	I0725 18:20:49.450360   42258 command_runner.go:130] >     {
	I0725 18:20:49.450366   42258 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0725 18:20:49.450373   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450378   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0725 18:20:49.450384   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450387   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450397   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0725 18:20:49.450403   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0725 18:20:49.450409   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450413   42258 command_runner.go:130] >       "size": "61245718",
	I0725 18:20:49.450417   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450421   42258 command_runner.go:130] >       "username": "nonroot",
	I0725 18:20:49.450428   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450432   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450438   42258 command_runner.go:130] >     },
	I0725 18:20:49.450442   42258 command_runner.go:130] >     {
	I0725 18:20:49.450448   42258 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0725 18:20:49.450452   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450457   42258 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0725 18:20:49.450463   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450467   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450476   42258 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0725 18:20:49.450482   42258 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0725 18:20:49.450486   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450491   42258 command_runner.go:130] >       "size": "150779692",
	I0725 18:20:49.450496   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.450500   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.450503   42258 command_runner.go:130] >       },
	I0725 18:20:49.450507   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450513   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450521   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450526   42258 command_runner.go:130] >     },
	I0725 18:20:49.450530   42258 command_runner.go:130] >     {
	I0725 18:20:49.450535   42258 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0725 18:20:49.450539   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450545   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0725 18:20:49.450550   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450554   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450564   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0725 18:20:49.450572   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0725 18:20:49.450576   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450580   42258 command_runner.go:130] >       "size": "117609954",
	I0725 18:20:49.450597   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.450603   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.450606   42258 command_runner.go:130] >       },
	I0725 18:20:49.450610   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450614   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450618   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450622   42258 command_runner.go:130] >     },
	I0725 18:20:49.450626   42258 command_runner.go:130] >     {
	I0725 18:20:49.450631   42258 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0725 18:20:49.450638   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450643   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0725 18:20:49.450648   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450652   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450666   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0725 18:20:49.450675   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0725 18:20:49.450679   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450686   42258 command_runner.go:130] >       "size": "112198984",
	I0725 18:20:49.450690   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.450693   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.450696   42258 command_runner.go:130] >       },
	I0725 18:20:49.450700   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450704   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450708   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450712   42258 command_runner.go:130] >     },
	I0725 18:20:49.450715   42258 command_runner.go:130] >     {
	I0725 18:20:49.450720   42258 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0725 18:20:49.450724   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450729   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0725 18:20:49.450733   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450736   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450743   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0725 18:20:49.450749   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0725 18:20:49.450754   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450758   42258 command_runner.go:130] >       "size": "85953945",
	I0725 18:20:49.450762   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.450765   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450769   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450772   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450775   42258 command_runner.go:130] >     },
	I0725 18:20:49.450778   42258 command_runner.go:130] >     {
	I0725 18:20:49.450784   42258 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0725 18:20:49.450788   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450792   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0725 18:20:49.450795   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450798   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450805   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0725 18:20:49.450812   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0725 18:20:49.450815   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450819   42258 command_runner.go:130] >       "size": "63051080",
	I0725 18:20:49.450822   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.450825   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.450828   42258 command_runner.go:130] >       },
	I0725 18:20:49.450831   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450836   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450839   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.450843   42258 command_runner.go:130] >     },
	I0725 18:20:49.450846   42258 command_runner.go:130] >     {
	I0725 18:20:49.450852   42258 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0725 18:20:49.450858   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.450862   42258 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0725 18:20:49.450865   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450869   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.450876   42258 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0725 18:20:49.450885   42258 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0725 18:20:49.450888   42258 command_runner.go:130] >       ],
	I0725 18:20:49.450892   42258 command_runner.go:130] >       "size": "750414",
	I0725 18:20:49.450898   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.450902   42258 command_runner.go:130] >         "value": "65535"
	I0725 18:20:49.450909   42258 command_runner.go:130] >       },
	I0725 18:20:49.450912   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.450918   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.450922   42258 command_runner.go:130] >       "pinned": true
	I0725 18:20:49.450927   42258 command_runner.go:130] >     }
	I0725 18:20:49.450930   42258 command_runner.go:130] >   ]
	I0725 18:20:49.450933   42258 command_runner.go:130] > }
	I0725 18:20:49.451115   42258 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:20:49.451129   42258 crio.go:433] Images already preloaded, skipping extraction
	I0725 18:20:49.451196   42258 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:20:49.481528   42258 command_runner.go:130] > {
	I0725 18:20:49.481546   42258 command_runner.go:130] >   "images": [
	I0725 18:20:49.481553   42258 command_runner.go:130] >     {
	I0725 18:20:49.481563   42258 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0725 18:20:49.481569   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481574   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0725 18:20:49.481578   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481582   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481590   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0725 18:20:49.481597   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0725 18:20:49.481602   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481607   42258 command_runner.go:130] >       "size": "87165492",
	I0725 18:20:49.481615   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.481620   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.481627   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.481630   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.481634   42258 command_runner.go:130] >     },
	I0725 18:20:49.481638   42258 command_runner.go:130] >     {
	I0725 18:20:49.481644   42258 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0725 18:20:49.481650   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481655   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0725 18:20:49.481659   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481663   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481671   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0725 18:20:49.481677   42258 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0725 18:20:49.481683   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481687   42258 command_runner.go:130] >       "size": "87174707",
	I0725 18:20:49.481691   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.481697   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.481703   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.481707   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.481711   42258 command_runner.go:130] >     },
	I0725 18:20:49.481715   42258 command_runner.go:130] >     {
	I0725 18:20:49.481721   42258 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0725 18:20:49.481725   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481730   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0725 18:20:49.481734   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481738   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481747   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0725 18:20:49.481754   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0725 18:20:49.481758   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481762   42258 command_runner.go:130] >       "size": "1363676",
	I0725 18:20:49.481766   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.481774   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.481778   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.481781   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.481785   42258 command_runner.go:130] >     },
	I0725 18:20:49.481789   42258 command_runner.go:130] >     {
	I0725 18:20:49.481796   42258 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0725 18:20:49.481800   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481807   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0725 18:20:49.481811   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481815   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481822   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0725 18:20:49.481834   42258 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0725 18:20:49.481839   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481844   42258 command_runner.go:130] >       "size": "31470524",
	I0725 18:20:49.481850   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.481854   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.481860   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.481864   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.481868   42258 command_runner.go:130] >     },
	I0725 18:20:49.481871   42258 command_runner.go:130] >     {
	I0725 18:20:49.481877   42258 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0725 18:20:49.481884   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481889   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0725 18:20:49.481893   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481897   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481904   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0725 18:20:49.481913   42258 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0725 18:20:49.481917   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481923   42258 command_runner.go:130] >       "size": "61245718",
	I0725 18:20:49.481926   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.481931   42258 command_runner.go:130] >       "username": "nonroot",
	I0725 18:20:49.481935   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.481941   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.481944   42258 command_runner.go:130] >     },
	I0725 18:20:49.481948   42258 command_runner.go:130] >     {
	I0725 18:20:49.481953   42258 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0725 18:20:49.481959   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.481966   42258 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0725 18:20:49.481974   42258 command_runner.go:130] >       ],
	I0725 18:20:49.481979   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.481989   42258 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0725 18:20:49.482002   42258 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0725 18:20:49.482010   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482016   42258 command_runner.go:130] >       "size": "150779692",
	I0725 18:20:49.482022   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.482026   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.482031   42258 command_runner.go:130] >       },
	I0725 18:20:49.482035   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482040   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482045   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.482052   42258 command_runner.go:130] >     },
	I0725 18:20:49.482057   42258 command_runner.go:130] >     {
	I0725 18:20:49.482070   42258 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0725 18:20:49.482079   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.482086   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0725 18:20:49.482094   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482101   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.482115   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0725 18:20:49.482129   42258 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0725 18:20:49.482137   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482141   42258 command_runner.go:130] >       "size": "117609954",
	I0725 18:20:49.482148   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.482152   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.482158   42258 command_runner.go:130] >       },
	I0725 18:20:49.482162   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482165   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482171   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.482176   42258 command_runner.go:130] >     },
	I0725 18:20:49.482179   42258 command_runner.go:130] >     {
	I0725 18:20:49.482185   42258 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0725 18:20:49.482191   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.482197   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0725 18:20:49.482201   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482205   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.482221   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0725 18:20:49.482231   42258 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0725 18:20:49.482236   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482241   42258 command_runner.go:130] >       "size": "112198984",
	I0725 18:20:49.482246   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.482250   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.482256   42258 command_runner.go:130] >       },
	I0725 18:20:49.482260   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482264   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482270   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.482274   42258 command_runner.go:130] >     },
	I0725 18:20:49.482277   42258 command_runner.go:130] >     {
	I0725 18:20:49.482283   42258 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0725 18:20:49.482290   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.482294   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0725 18:20:49.482299   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482303   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.482312   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0725 18:20:49.482319   42258 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0725 18:20:49.482324   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482328   42258 command_runner.go:130] >       "size": "85953945",
	I0725 18:20:49.482332   42258 command_runner.go:130] >       "uid": null,
	I0725 18:20:49.482336   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482340   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482344   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.482347   42258 command_runner.go:130] >     },
	I0725 18:20:49.482351   42258 command_runner.go:130] >     {
	I0725 18:20:49.482357   42258 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0725 18:20:49.482363   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.482369   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0725 18:20:49.482374   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482378   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.482385   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0725 18:20:49.482394   42258 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0725 18:20:49.482398   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482402   42258 command_runner.go:130] >       "size": "63051080",
	I0725 18:20:49.482406   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.482410   42258 command_runner.go:130] >         "value": "0"
	I0725 18:20:49.482414   42258 command_runner.go:130] >       },
	I0725 18:20:49.482420   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482425   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482429   42258 command_runner.go:130] >       "pinned": false
	I0725 18:20:49.482432   42258 command_runner.go:130] >     },
	I0725 18:20:49.482439   42258 command_runner.go:130] >     {
	I0725 18:20:49.482449   42258 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0725 18:20:49.482459   42258 command_runner.go:130] >       "repoTags": [
	I0725 18:20:49.482469   42258 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0725 18:20:49.482474   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482482   42258 command_runner.go:130] >       "repoDigests": [
	I0725 18:20:49.482491   42258 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0725 18:20:49.482504   42258 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0725 18:20:49.482512   42258 command_runner.go:130] >       ],
	I0725 18:20:49.482523   42258 command_runner.go:130] >       "size": "750414",
	I0725 18:20:49.482531   42258 command_runner.go:130] >       "uid": {
	I0725 18:20:49.482537   42258 command_runner.go:130] >         "value": "65535"
	I0725 18:20:49.482543   42258 command_runner.go:130] >       },
	I0725 18:20:49.482548   42258 command_runner.go:130] >       "username": "",
	I0725 18:20:49.482555   42258 command_runner.go:130] >       "spec": null,
	I0725 18:20:49.482560   42258 command_runner.go:130] >       "pinned": true
	I0725 18:20:49.482569   42258 command_runner.go:130] >     }
	I0725 18:20:49.482576   42258 command_runner.go:130] >   ]
	I0725 18:20:49.482580   42258 command_runner.go:130] > }
	I0725 18:20:49.482857   42258 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:20:49.482875   42258 cache_images.go:84] Images are preloaded, skipping loading
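(The preload check above lists the runtime's images with "sudo crictl images --output json" and compares them against the image set expected for Kubernetes v1.30.3 on cri-o. A rough manual equivalent on the node, assuming jq is installed there, would be:
	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
which should include the kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, etcd, coredns and pause images shown in the JSON above.)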
	I0725 18:20:49.482884   42258 kubeadm.go:934] updating node { 192.168.39.54 8443 v1.30.3 crio true true} ...
	I0725 18:20:49.482981   42258 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-253131 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-253131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
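(A minimal way to confirm that the kubelet flags written above are actually in effect on the node, assuming systemd and a running kubelet, is:
	systemctl cat kubelet    # prints the unit plus drop-ins, including the ExecStart line
	pgrep -a kubelet         # prints the live command line with --node-ip and --hostname-override
These are illustrative verification commands, not part of the test run.)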
	I0725 18:20:49.483042   42258 ssh_runner.go:195] Run: crio config
	I0725 18:20:49.526034   42258 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0725 18:20:49.526071   42258 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0725 18:20:49.526084   42258 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0725 18:20:49.526089   42258 command_runner.go:130] > #
	I0725 18:20:49.526100   42258 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0725 18:20:49.526107   42258 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0725 18:20:49.526113   42258 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0725 18:20:49.526120   42258 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0725 18:20:49.526129   42258 command_runner.go:130] > # reload'.
	I0725 18:20:49.526135   42258 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0725 18:20:49.526141   42258 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0725 18:20:49.526147   42258 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0725 18:20:49.526155   42258 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0725 18:20:49.526160   42258 command_runner.go:130] > [crio]
	I0725 18:20:49.526170   42258 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0725 18:20:49.526181   42258 command_runner.go:130] > # containers images, in this directory.
	I0725 18:20:49.526188   42258 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0725 18:20:49.526204   42258 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0725 18:20:49.526213   42258 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0725 18:20:49.526223   42258 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0725 18:20:49.526233   42258 command_runner.go:130] > # imagestore = ""
	I0725 18:20:49.526241   42258 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0725 18:20:49.526252   42258 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0725 18:20:49.526260   42258 command_runner.go:130] > storage_driver = "overlay"
	I0725 18:20:49.526272   42258 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0725 18:20:49.526281   42258 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0725 18:20:49.526290   42258 command_runner.go:130] > storage_option = [
	I0725 18:20:49.526298   42258 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0725 18:20:49.526306   42258 command_runner.go:130] > ]
	I0725 18:20:49.526316   42258 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0725 18:20:49.526325   42258 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0725 18:20:49.526335   42258 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0725 18:20:49.526347   42258 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0725 18:20:49.526359   42258 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0725 18:20:49.526366   42258 command_runner.go:130] > # always happen on a node reboot
	I0725 18:20:49.526371   42258 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0725 18:20:49.526380   42258 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0725 18:20:49.526388   42258 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0725 18:20:49.526393   42258 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0725 18:20:49.526399   42258 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0725 18:20:49.526406   42258 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0725 18:20:49.526422   42258 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0725 18:20:49.526430   42258 command_runner.go:130] > # internal_wipe = true
	I0725 18:20:49.526443   42258 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0725 18:20:49.526453   42258 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0725 18:20:49.526460   42258 command_runner.go:130] > # internal_repair = false
	I0725 18:20:49.526472   42258 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0725 18:20:49.526484   42258 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0725 18:20:49.526495   42258 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0725 18:20:49.526506   42258 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0725 18:20:49.526515   42258 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0725 18:20:49.526530   42258 command_runner.go:130] > [crio.api]
	I0725 18:20:49.526538   42258 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0725 18:20:49.526548   42258 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0725 18:20:49.526556   42258 command_runner.go:130] > # IP address on which the stream server will listen.
	I0725 18:20:49.526566   42258 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0725 18:20:49.526577   42258 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0725 18:20:49.526588   42258 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0725 18:20:49.526595   42258 command_runner.go:130] > # stream_port = "0"
	I0725 18:20:49.526605   42258 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0725 18:20:49.526614   42258 command_runner.go:130] > # stream_enable_tls = false
	I0725 18:20:49.526624   42258 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0725 18:20:49.526633   42258 command_runner.go:130] > # stream_idle_timeout = ""
	I0725 18:20:49.526642   42258 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0725 18:20:49.526654   42258 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0725 18:20:49.526663   42258 command_runner.go:130] > # minutes.
	I0725 18:20:49.526673   42258 command_runner.go:130] > # stream_tls_cert = ""
	I0725 18:20:49.526686   42258 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0725 18:20:49.526699   42258 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0725 18:20:49.526709   42258 command_runner.go:130] > # stream_tls_key = ""
	I0725 18:20:49.526719   42258 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0725 18:20:49.526731   42258 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0725 18:20:49.526750   42258 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0725 18:20:49.526759   42258 command_runner.go:130] > # stream_tls_ca = ""
	I0725 18:20:49.526770   42258 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0725 18:20:49.526779   42258 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0725 18:20:49.526791   42258 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0725 18:20:49.526801   42258 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0725 18:20:49.526814   42258 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0725 18:20:49.526826   42258 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0725 18:20:49.526832   42258 command_runner.go:130] > [crio.runtime]
	I0725 18:20:49.526840   42258 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0725 18:20:49.526852   42258 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0725 18:20:49.526861   42258 command_runner.go:130] > # "nofile=1024:2048"
	I0725 18:20:49.526870   42258 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0725 18:20:49.526880   42258 command_runner.go:130] > # default_ulimits = [
	I0725 18:20:49.526885   42258 command_runner.go:130] > # ]
	I0725 18:20:49.526897   42258 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0725 18:20:49.526906   42258 command_runner.go:130] > # no_pivot = false
	I0725 18:20:49.526915   42258 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0725 18:20:49.526928   42258 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0725 18:20:49.526939   42258 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0725 18:20:49.526951   42258 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0725 18:20:49.526961   42258 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0725 18:20:49.526975   42258 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0725 18:20:49.526985   42258 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0725 18:20:49.526993   42258 command_runner.go:130] > # Cgroup setting for conmon
	I0725 18:20:49.527006   42258 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0725 18:20:49.527012   42258 command_runner.go:130] > conmon_cgroup = "pod"
	I0725 18:20:49.527023   42258 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0725 18:20:49.527034   42258 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0725 18:20:49.527050   42258 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0725 18:20:49.527058   42258 command_runner.go:130] > conmon_env = [
	I0725 18:20:49.527068   42258 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0725 18:20:49.527076   42258 command_runner.go:130] > ]
	I0725 18:20:49.527087   42258 command_runner.go:130] > # Additional environment variables to set for all the
	I0725 18:20:49.527099   42258 command_runner.go:130] > # containers. These are overridden if set in the
	I0725 18:20:49.527108   42258 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0725 18:20:49.527117   42258 command_runner.go:130] > # default_env = [
	I0725 18:20:49.527121   42258 command_runner.go:130] > # ]
	I0725 18:20:49.527126   42258 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0725 18:20:49.527136   42258 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0725 18:20:49.527142   42258 command_runner.go:130] > # selinux = false
	I0725 18:20:49.527152   42258 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0725 18:20:49.527164   42258 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0725 18:20:49.527176   42258 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0725 18:20:49.527186   42258 command_runner.go:130] > # seccomp_profile = ""
	I0725 18:20:49.527195   42258 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0725 18:20:49.527206   42258 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0725 18:20:49.527217   42258 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0725 18:20:49.527226   42258 command_runner.go:130] > # which might increase security.
	I0725 18:20:49.527233   42258 command_runner.go:130] > # This option is currently deprecated,
	I0725 18:20:49.527245   42258 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0725 18:20:49.527256   42258 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0725 18:20:49.527270   42258 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0725 18:20:49.527283   42258 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0725 18:20:49.527293   42258 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0725 18:20:49.527305   42258 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0725 18:20:49.527313   42258 command_runner.go:130] > # This option supports live configuration reload.
	I0725 18:20:49.527323   42258 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0725 18:20:49.527333   42258 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0725 18:20:49.527343   42258 command_runner.go:130] > # the cgroup blockio controller.
	I0725 18:20:49.527350   42258 command_runner.go:130] > # blockio_config_file = ""
	I0725 18:20:49.527363   42258 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0725 18:20:49.527373   42258 command_runner.go:130] > # blockio parameters.
	I0725 18:20:49.527380   42258 command_runner.go:130] > # blockio_reload = false
	I0725 18:20:49.527392   42258 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0725 18:20:49.527402   42258 command_runner.go:130] > # irqbalance daemon.
	I0725 18:20:49.527413   42258 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0725 18:20:49.527425   42258 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0725 18:20:49.527439   42258 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0725 18:20:49.527454   42258 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0725 18:20:49.527467   42258 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0725 18:20:49.527478   42258 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0725 18:20:49.527489   42258 command_runner.go:130] > # This option supports live configuration reload.
	I0725 18:20:49.527497   42258 command_runner.go:130] > # rdt_config_file = ""
	I0725 18:20:49.527506   42258 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0725 18:20:49.527516   42258 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0725 18:20:49.527580   42258 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0725 18:20:49.527596   42258 command_runner.go:130] > # separate_pull_cgroup = ""
	I0725 18:20:49.527605   42258 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0725 18:20:49.527615   42258 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0725 18:20:49.527624   42258 command_runner.go:130] > # will be added.
	I0725 18:20:49.527632   42258 command_runner.go:130] > # default_capabilities = [
	I0725 18:20:49.527640   42258 command_runner.go:130] > # 	"CHOWN",
	I0725 18:20:49.527646   42258 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0725 18:20:49.527655   42258 command_runner.go:130] > # 	"FSETID",
	I0725 18:20:49.527661   42258 command_runner.go:130] > # 	"FOWNER",
	I0725 18:20:49.527670   42258 command_runner.go:130] > # 	"SETGID",
	I0725 18:20:49.527676   42258 command_runner.go:130] > # 	"SETUID",
	I0725 18:20:49.527685   42258 command_runner.go:130] > # 	"SETPCAP",
	I0725 18:20:49.527691   42258 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0725 18:20:49.527700   42258 command_runner.go:130] > # 	"KILL",
	I0725 18:20:49.527705   42258 command_runner.go:130] > # ]
	I0725 18:20:49.527720   42258 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0725 18:20:49.527733   42258 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0725 18:20:49.527743   42258 command_runner.go:130] > # add_inheritable_capabilities = false
	I0725 18:20:49.527756   42258 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0725 18:20:49.527766   42258 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0725 18:20:49.527775   42258 command_runner.go:130] > default_sysctls = [
	I0725 18:20:49.527783   42258 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0725 18:20:49.527791   42258 command_runner.go:130] > ]
	I0725 18:20:49.527798   42258 command_runner.go:130] > # List of devices on the host that a
	I0725 18:20:49.527811   42258 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0725 18:20:49.527817   42258 command_runner.go:130] > # allowed_devices = [
	I0725 18:20:49.527826   42258 command_runner.go:130] > # 	"/dev/fuse",
	I0725 18:20:49.527831   42258 command_runner.go:130] > # ]
	I0725 18:20:49.527984   42258 command_runner.go:130] > # List of additional devices, specified as
	I0725 18:20:49.528008   42258 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0725 18:20:49.528023   42258 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0725 18:20:49.528036   42258 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0725 18:20:49.528047   42258 command_runner.go:130] > # additional_devices = [
	I0725 18:20:49.528056   42258 command_runner.go:130] > # ]
	I0725 18:20:49.528111   42258 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0725 18:20:49.528126   42258 command_runner.go:130] > # cdi_spec_dirs = [
	I0725 18:20:49.528130   42258 command_runner.go:130] > # 	"/etc/cdi",
	I0725 18:20:49.528145   42258 command_runner.go:130] > # 	"/var/run/cdi",
	I0725 18:20:49.528155   42258 command_runner.go:130] > # ]
	I0725 18:20:49.528167   42258 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0725 18:20:49.528182   42258 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0725 18:20:49.528191   42258 command_runner.go:130] > # Defaults to false.
	I0725 18:20:49.528199   42258 command_runner.go:130] > # device_ownership_from_security_context = false
	I0725 18:20:49.528212   42258 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0725 18:20:49.528223   42258 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0725 18:20:49.528232   42258 command_runner.go:130] > # hooks_dir = [
	I0725 18:20:49.528241   42258 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0725 18:20:49.528249   42258 command_runner.go:130] > # ]
	I0725 18:20:49.528264   42258 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0725 18:20:49.528278   42258 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0725 18:20:49.528291   42258 command_runner.go:130] > # its default mounts from the following two files:
	I0725 18:20:49.528298   42258 command_runner.go:130] > #
	I0725 18:20:49.528308   42258 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0725 18:20:49.528334   42258 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0725 18:20:49.528347   42258 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0725 18:20:49.528352   42258 command_runner.go:130] > #
	I0725 18:20:49.528364   42258 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0725 18:20:49.528377   42258 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0725 18:20:49.528390   42258 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0725 18:20:49.528400   42258 command_runner.go:130] > #      only add mounts it finds in this file.
	I0725 18:20:49.528408   42258 command_runner.go:130] > #
	I0725 18:20:49.528415   42258 command_runner.go:130] > # default_mounts_file = ""
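To make the /SRC:/DST format above concrete, a mounts file could contain lines like the following (a hypothetical sketch; these paths are assumptions, not taken from this run):

	/usr/share/my-secrets:/run/secrets
	/etc/pki/my-ca:/etc/pki/ca-trust

One mount per line, as described above; each source would be added as a default mount for containers.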
	I0725 18:20:49.528427   42258 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0725 18:20:49.528440   42258 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0725 18:20:49.528451   42258 command_runner.go:130] > pids_limit = 1024
	I0725 18:20:49.528463   42258 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0725 18:20:49.528476   42258 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0725 18:20:49.528485   42258 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0725 18:20:49.528499   42258 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0725 18:20:49.528507   42258 command_runner.go:130] > # log_size_max = -1
	I0725 18:20:49.528517   42258 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0725 18:20:49.528523   42258 command_runner.go:130] > # log_to_journald = false
	I0725 18:20:49.528537   42258 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0725 18:20:49.528547   42258 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0725 18:20:49.528556   42258 command_runner.go:130] > # Path to directory for container attach sockets.
	I0725 18:20:49.528570   42258 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0725 18:20:49.528581   42258 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0725 18:20:49.528591   42258 command_runner.go:130] > # bind_mount_prefix = ""
	I0725 18:20:49.528602   42258 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0725 18:20:49.528611   42258 command_runner.go:130] > # read_only = false
	I0725 18:20:49.528620   42258 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0725 18:20:49.528628   42258 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0725 18:20:49.528635   42258 command_runner.go:130] > # live configuration reload.
	I0725 18:20:49.528639   42258 command_runner.go:130] > # log_level = "info"
	I0725 18:20:49.528645   42258 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0725 18:20:49.528652   42258 command_runner.go:130] > # This option supports live configuration reload.
	I0725 18:20:49.528657   42258 command_runner.go:130] > # log_filter = ""
	I0725 18:20:49.528665   42258 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0725 18:20:49.528673   42258 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0725 18:20:49.528677   42258 command_runner.go:130] > # separated by comma.
	I0725 18:20:49.528685   42258 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0725 18:20:49.528691   42258 command_runner.go:130] > # uid_mappings = ""
	I0725 18:20:49.528697   42258 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0725 18:20:49.528705   42258 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0725 18:20:49.528709   42258 command_runner.go:130] > # separated by comma.
	I0725 18:20:49.528716   42258 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0725 18:20:49.528722   42258 command_runner.go:130] > # gid_mappings = ""
	I0725 18:20:49.528728   42258 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0725 18:20:49.528736   42258 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0725 18:20:49.528744   42258 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0725 18:20:49.528759   42258 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0725 18:20:49.528766   42258 command_runner.go:130] > # minimum_mappable_uid = -1
	I0725 18:20:49.528772   42258 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0725 18:20:49.528779   42258 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0725 18:20:49.528786   42258 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0725 18:20:49.528795   42258 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0725 18:20:49.528801   42258 command_runner.go:130] > # minimum_mappable_gid = -1
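Purely as an illustration of the containerID:HostID:Size syntax described above (the values are assumptions, not taken from this run), the two mapping options might look like:

	# uid_mappings = "0:100000:65536"
	# gid_mappings = "0:100000:65536,70000:200000:1000"

Multiple ranges are comma-separated, as shown in the second line.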
	I0725 18:20:49.528806   42258 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0725 18:20:49.528814   42258 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0725 18:20:49.528821   42258 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0725 18:20:49.528832   42258 command_runner.go:130] > # ctr_stop_timeout = 30
	I0725 18:20:49.528840   42258 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0725 18:20:49.528845   42258 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0725 18:20:49.528852   42258 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0725 18:20:49.528857   42258 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0725 18:20:49.528863   42258 command_runner.go:130] > drop_infra_ctr = false
	I0725 18:20:49.528869   42258 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0725 18:20:49.528876   42258 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0725 18:20:49.528885   42258 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0725 18:20:49.528891   42258 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0725 18:20:49.528898   42258 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0725 18:20:49.528905   42258 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0725 18:20:49.528913   42258 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0725 18:20:49.528920   42258 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0725 18:20:49.528924   42258 command_runner.go:130] > # shared_cpuset = ""
	I0725 18:20:49.528931   42258 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0725 18:20:49.528936   42258 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0725 18:20:49.528943   42258 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0725 18:20:49.528950   42258 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0725 18:20:49.528956   42258 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0725 18:20:49.528961   42258 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0725 18:20:49.528969   42258 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0725 18:20:49.528975   42258 command_runner.go:130] > # enable_criu_support = false
	I0725 18:20:49.528981   42258 command_runner.go:130] > # Enable/disable the generation of the container,
	I0725 18:20:49.528988   42258 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0725 18:20:49.528993   42258 command_runner.go:130] > # enable_pod_events = false
	I0725 18:20:49.528999   42258 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0725 18:20:49.529012   42258 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0725 18:20:49.529018   42258 command_runner.go:130] > # default_runtime = "runc"
	I0725 18:20:49.529023   42258 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0725 18:20:49.529032   42258 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0725 18:20:49.529042   42258 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0725 18:20:49.529052   42258 command_runner.go:130] > # creation as a file is not desired either.
	I0725 18:20:49.529062   42258 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0725 18:20:49.529069   42258 command_runner.go:130] > # the hostname is being managed dynamically.
	I0725 18:20:49.529073   42258 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0725 18:20:49.529079   42258 command_runner.go:130] > # ]
	I0725 18:20:49.529085   42258 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0725 18:20:49.529094   42258 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0725 18:20:49.529099   42258 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0725 18:20:49.529104   42258 command_runner.go:130] > # Each entry in the table should follow the format:
	I0725 18:20:49.529110   42258 command_runner.go:130] > #
	I0725 18:20:49.529114   42258 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0725 18:20:49.529119   42258 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0725 18:20:49.529140   42258 command_runner.go:130] > # runtime_type = "oci"
	I0725 18:20:49.529146   42258 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0725 18:20:49.529151   42258 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0725 18:20:49.529157   42258 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0725 18:20:49.529162   42258 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0725 18:20:49.529168   42258 command_runner.go:130] > # monitor_env = []
	I0725 18:20:49.529172   42258 command_runner.go:130] > # privileged_without_host_devices = false
	I0725 18:20:49.529178   42258 command_runner.go:130] > # allowed_annotations = []
	I0725 18:20:49.529183   42258 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0725 18:20:49.529189   42258 command_runner.go:130] > # Where:
	I0725 18:20:49.529194   42258 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0725 18:20:49.529202   42258 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0725 18:20:49.529208   42258 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0725 18:20:49.529216   42258 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0725 18:20:49.529221   42258 command_runner.go:130] > #   in $PATH.
	I0725 18:20:49.529227   42258 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0725 18:20:49.529234   42258 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0725 18:20:49.529240   42258 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0725 18:20:49.529246   42258 command_runner.go:130] > #   state.
	I0725 18:20:49.529251   42258 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0725 18:20:49.529277   42258 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0725 18:20:49.529285   42258 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0725 18:20:49.529290   42258 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0725 18:20:49.529298   42258 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0725 18:20:49.529305   42258 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0725 18:20:49.529311   42258 command_runner.go:130] > #   The currently recognized values are:
	I0725 18:20:49.529321   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0725 18:20:49.529330   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0725 18:20:49.529338   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0725 18:20:49.529346   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0725 18:20:49.529353   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0725 18:20:49.529362   42258 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0725 18:20:49.529370   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0725 18:20:49.529376   42258 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0725 18:20:49.529384   42258 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0725 18:20:49.529392   42258 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0725 18:20:49.529399   42258 command_runner.go:130] > #   deprecated option "conmon".
	I0725 18:20:49.529405   42258 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0725 18:20:49.529412   42258 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0725 18:20:49.529419   42258 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0725 18:20:49.529425   42258 command_runner.go:130] > #   should be moved to the container's cgroup
	I0725 18:20:49.529432   42258 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0725 18:20:49.529438   42258 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0725 18:20:49.529444   42258 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0725 18:20:49.529451   42258 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0725 18:20:49.529454   42258 command_runner.go:130] > #
	I0725 18:20:49.529459   42258 command_runner.go:130] > # Using the seccomp notifier feature:
	I0725 18:20:49.529463   42258 command_runner.go:130] > #
	I0725 18:20:49.529469   42258 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0725 18:20:49.529487   42258 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0725 18:20:49.529493   42258 command_runner.go:130] > #
	I0725 18:20:49.529499   42258 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0725 18:20:49.529509   42258 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0725 18:20:49.529515   42258 command_runner.go:130] > #
	I0725 18:20:49.529521   42258 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0725 18:20:49.529525   42258 command_runner.go:130] > # feature.
	I0725 18:20:49.529528   42258 command_runner.go:130] > #
	I0725 18:20:49.529536   42258 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0725 18:20:49.529544   42258 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0725 18:20:49.529550   42258 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0725 18:20:49.529559   42258 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0725 18:20:49.529567   42258 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0725 18:20:49.529572   42258 command_runner.go:130] > #
	I0725 18:20:49.529578   42258 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0725 18:20:49.529586   42258 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0725 18:20:49.529591   42258 command_runner.go:130] > #
	I0725 18:20:49.529596   42258 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0725 18:20:49.529604   42258 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0725 18:20:49.529608   42258 command_runner.go:130] > #
	I0725 18:20:49.529614   42258 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0725 18:20:49.529622   42258 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0725 18:20:49.529627   42258 command_runner.go:130] > # limitation.
	I0725 18:20:49.529631   42258 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0725 18:20:49.529635   42258 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0725 18:20:49.529641   42258 command_runner.go:130] > runtime_type = "oci"
	I0725 18:20:49.529645   42258 command_runner.go:130] > runtime_root = "/run/runc"
	I0725 18:20:49.529650   42258 command_runner.go:130] > runtime_config_path = ""
	I0725 18:20:49.529655   42258 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0725 18:20:49.529661   42258 command_runner.go:130] > monitor_cgroup = "pod"
	I0725 18:20:49.529665   42258 command_runner.go:130] > monitor_exec_cgroup = ""
	I0725 18:20:49.529671   42258 command_runner.go:130] > monitor_env = [
	I0725 18:20:49.529676   42258 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0725 18:20:49.529682   42258 command_runner.go:130] > ]
	I0725 18:20:49.529687   42258 command_runner.go:130] > privileged_without_host_devices = false
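Note that the runc handler above does not list any allowed_annotations, so the seccomp notifier feature described earlier would not be active for this run. A minimal sketch of the addition the comments describe (illustrative only, not part of this configuration) is:

	# allowed_annotations = [
	# 	"io.kubernetes.cri-o.seccompNotifierAction",
	# ]

A pod opting in would then carry the annotation io.kubernetes.cri-o.seccompNotifierAction (for example set to "stop") and use restartPolicy: Never, as the comments above explain.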
	I0725 18:20:49.529695   42258 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0725 18:20:49.529702   42258 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0725 18:20:49.529707   42258 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0725 18:20:49.529716   42258 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0725 18:20:49.529724   42258 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0725 18:20:49.529732   42258 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0725 18:20:49.529743   42258 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0725 18:20:49.529752   42258 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0725 18:20:49.529759   42258 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0725 18:20:49.529768   42258 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0725 18:20:49.529773   42258 command_runner.go:130] > # Example:
	I0725 18:20:49.529777   42258 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0725 18:20:49.529781   42258 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0725 18:20:49.529786   42258 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0725 18:20:49.529790   42258 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0725 18:20:49.529794   42258 command_runner.go:130] > # cpuset = 0
	I0725 18:20:49.529797   42258 command_runner.go:130] > # cpushares = "0-1"
	I0725 18:20:49.529800   42258 command_runner.go:130] > # Where:
	I0725 18:20:49.529804   42258 command_runner.go:130] > # The workload name is workload-type.
	I0725 18:20:49.529810   42258 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0725 18:20:49.529815   42258 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0725 18:20:49.529820   42258 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0725 18:20:49.529826   42258 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0725 18:20:49.529831   42258 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0725 18:20:49.529836   42258 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0725 18:20:49.529842   42258 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0725 18:20:49.529845   42258 command_runner.go:130] > # Default value is set to true
	I0725 18:20:49.529849   42258 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0725 18:20:49.529854   42258 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0725 18:20:49.529859   42258 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0725 18:20:49.529863   42258 command_runner.go:130] > # Default value is set to 'false'
	I0725 18:20:49.529866   42258 command_runner.go:130] > # disable_hostport_mapping = false
	I0725 18:20:49.529872   42258 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0725 18:20:49.529874   42258 command_runner.go:130] > #
	I0725 18:20:49.529879   42258 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0725 18:20:49.529888   42258 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0725 18:20:49.529893   42258 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0725 18:20:49.529898   42258 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0725 18:20:49.529903   42258 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0725 18:20:49.529907   42258 command_runner.go:130] > [crio.image]
	I0725 18:20:49.529912   42258 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0725 18:20:49.529917   42258 command_runner.go:130] > # default_transport = "docker://"
	I0725 18:20:49.529922   42258 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0725 18:20:49.529928   42258 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0725 18:20:49.529932   42258 command_runner.go:130] > # global_auth_file = ""
	I0725 18:20:49.529937   42258 command_runner.go:130] > # The image used to instantiate infra containers.
	I0725 18:20:49.529941   42258 command_runner.go:130] > # This option supports live configuration reload.
	I0725 18:20:49.529946   42258 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0725 18:20:49.529951   42258 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0725 18:20:49.529959   42258 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0725 18:20:49.529965   42258 command_runner.go:130] > # This option supports live configuration reload.
	I0725 18:20:49.529974   42258 command_runner.go:130] > # pause_image_auth_file = ""
	I0725 18:20:49.529981   42258 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0725 18:20:49.529987   42258 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0725 18:20:49.529993   42258 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0725 18:20:49.529999   42258 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0725 18:20:49.530005   42258 command_runner.go:130] > # pause_command = "/pause"
	I0725 18:20:49.530010   42258 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0725 18:20:49.530018   42258 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0725 18:20:49.530024   42258 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0725 18:20:49.530032   42258 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0725 18:20:49.530038   42258 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0725 18:20:49.530043   42258 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0725 18:20:49.530050   42258 command_runner.go:130] > # pinned_images = [
	I0725 18:20:49.530053   42258 command_runner.go:130] > # ]
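As a hypothetical illustration of the three pattern styles described above (the image names are assumptions, not taken from this run):

	# pinned_images = [
	# 	"registry.k8s.io/pause:3.9",  # exact: must match the entire name
	# 	"registry.k8s.io/kube-*",     # glob: wildcard only at the end
	# 	"*coredns*",                  # keyword: wildcards on both ends
	# ]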
	I0725 18:20:49.530058   42258 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0725 18:20:49.530065   42258 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0725 18:20:49.530071   42258 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0725 18:20:49.530078   42258 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0725 18:20:49.530083   42258 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0725 18:20:49.530089   42258 command_runner.go:130] > # signature_policy = ""
	I0725 18:20:49.530094   42258 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0725 18:20:49.530102   42258 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0725 18:20:49.530108   42258 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0725 18:20:49.530117   42258 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0725 18:20:49.530122   42258 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0725 18:20:49.530127   42258 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
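For example, with the default signature_policy_dir shown above, an image pulled for a pod in the kube-system namespace would be checked against /etc/crio/policies/kube-system.json; if no pod namespace is provided in the sandbox config, or that file does not exist, CRI-O falls back to signature_policy or the system-wide policy, as described.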
	I0725 18:20:49.530135   42258 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0725 18:20:49.530141   42258 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0725 18:20:49.530147   42258 command_runner.go:130] > # changing them here.
	I0725 18:20:49.530159   42258 command_runner.go:130] > # insecure_registries = [
	I0725 18:20:49.530166   42258 command_runner.go:130] > # ]
	I0725 18:20:49.530173   42258 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0725 18:20:49.530181   42258 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0725 18:20:49.530187   42258 command_runner.go:130] > # image_volumes = "mkdir"
	I0725 18:20:49.530195   42258 command_runner.go:130] > # Temporary directory to use for storing big files
	I0725 18:20:49.530204   42258 command_runner.go:130] > # big_files_temporary_dir = ""
	I0725 18:20:49.530212   42258 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0725 18:20:49.530221   42258 command_runner.go:130] > # CNI plugins.
	I0725 18:20:49.530225   42258 command_runner.go:130] > [crio.network]
	I0725 18:20:49.530230   42258 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0725 18:20:49.530236   42258 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0725 18:20:49.530240   42258 command_runner.go:130] > # cni_default_network = ""
	I0725 18:20:49.530246   42258 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0725 18:20:49.530253   42258 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0725 18:20:49.530262   42258 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0725 18:20:49.530268   42258 command_runner.go:130] > # plugin_dirs = [
	I0725 18:20:49.530273   42258 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0725 18:20:49.530276   42258 command_runner.go:130] > # ]
	I0725 18:20:49.530281   42258 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0725 18:20:49.530285   42258 command_runner.go:130] > [crio.metrics]
	I0725 18:20:49.530290   42258 command_runner.go:130] > # Globally enable or disable metrics support.
	I0725 18:20:49.530296   42258 command_runner.go:130] > enable_metrics = true
	I0725 18:20:49.530303   42258 command_runner.go:130] > # Specify enabled metrics collectors.
	I0725 18:20:49.530313   42258 command_runner.go:130] > # Per default all metrics are enabled.
	I0725 18:20:49.530323   42258 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0725 18:20:49.530334   42258 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0725 18:20:49.530344   42258 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0725 18:20:49.530351   42258 command_runner.go:130] > # metrics_collectors = [
	I0725 18:20:49.530359   42258 command_runner.go:130] > # 	"operations",
	I0725 18:20:49.530367   42258 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0725 18:20:49.530377   42258 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0725 18:20:49.530387   42258 command_runner.go:130] > # 	"operations_errors",
	I0725 18:20:49.530397   42258 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0725 18:20:49.530403   42258 command_runner.go:130] > # 	"image_pulls_by_name",
	I0725 18:20:49.530407   42258 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0725 18:20:49.530413   42258 command_runner.go:130] > # 	"image_pulls_failures",
	I0725 18:20:49.530417   42258 command_runner.go:130] > # 	"image_pulls_successes",
	I0725 18:20:49.530423   42258 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0725 18:20:49.530427   42258 command_runner.go:130] > # 	"image_layer_reuse",
	I0725 18:20:49.530434   42258 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0725 18:20:49.530438   42258 command_runner.go:130] > # 	"containers_oom_total",
	I0725 18:20:49.530444   42258 command_runner.go:130] > # 	"containers_oom",
	I0725 18:20:49.530448   42258 command_runner.go:130] > # 	"processes_defunct",
	I0725 18:20:49.530454   42258 command_runner.go:130] > # 	"operations_total",
	I0725 18:20:49.530458   42258 command_runner.go:130] > # 	"operations_latency_seconds",
	I0725 18:20:49.530465   42258 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0725 18:20:49.530469   42258 command_runner.go:130] > # 	"operations_errors_total",
	I0725 18:20:49.530475   42258 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0725 18:20:49.530480   42258 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0725 18:20:49.530486   42258 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0725 18:20:49.530491   42258 command_runner.go:130] > # 	"image_pulls_success_total",
	I0725 18:20:49.530498   42258 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0725 18:20:49.530502   42258 command_runner.go:130] > # 	"containers_oom_count_total",
	I0725 18:20:49.530510   42258 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0725 18:20:49.530514   42258 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0725 18:20:49.530518   42258 command_runner.go:130] > # ]
	I0725 18:20:49.530525   42258 command_runner.go:130] > # The port on which the metrics server will listen.
	I0725 18:20:49.530529   42258 command_runner.go:130] > # metrics_port = 9090
	I0725 18:20:49.530536   42258 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0725 18:20:49.530540   42258 command_runner.go:130] > # metrics_socket = ""
	I0725 18:20:49.530546   42258 command_runner.go:130] > # The certificate for the secure metrics server.
	I0725 18:20:49.530554   42258 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0725 18:20:49.530562   42258 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0725 18:20:49.530568   42258 command_runner.go:130] > # certificate on any modification event.
	I0725 18:20:49.530572   42258 command_runner.go:130] > # metrics_cert = ""
	I0725 18:20:49.530578   42258 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0725 18:20:49.530583   42258 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0725 18:20:49.530589   42258 command_runner.go:130] > # metrics_key = ""
	I0725 18:20:49.530595   42258 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0725 18:20:49.530601   42258 command_runner.go:130] > [crio.tracing]
	I0725 18:20:49.530605   42258 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0725 18:20:49.530609   42258 command_runner.go:130] > # enable_tracing = false
	I0725 18:20:49.530615   42258 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0725 18:20:49.530622   42258 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0725 18:20:49.530628   42258 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0725 18:20:49.530635   42258 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0725 18:20:49.530639   42258 command_runner.go:130] > # CRI-O NRI configuration.
	I0725 18:20:49.530643   42258 command_runner.go:130] > [crio.nri]
	I0725 18:20:49.530647   42258 command_runner.go:130] > # Globally enable or disable NRI.
	I0725 18:20:49.530653   42258 command_runner.go:130] > # enable_nri = false
	I0725 18:20:49.530657   42258 command_runner.go:130] > # NRI socket to listen on.
	I0725 18:20:49.530663   42258 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0725 18:20:49.530668   42258 command_runner.go:130] > # NRI plugin directory to use.
	I0725 18:20:49.530674   42258 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0725 18:20:49.530679   42258 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0725 18:20:49.530686   42258 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0725 18:20:49.530691   42258 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0725 18:20:49.530696   42258 command_runner.go:130] > # nri_disable_connections = false
	I0725 18:20:49.530701   42258 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0725 18:20:49.530707   42258 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0725 18:20:49.530712   42258 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0725 18:20:49.530718   42258 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0725 18:20:49.530724   42258 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0725 18:20:49.530730   42258 command_runner.go:130] > [crio.stats]
	I0725 18:20:49.530735   42258 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0725 18:20:49.530742   42258 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0725 18:20:49.530746   42258 command_runner.go:130] > # stats_collection_period = 0
	I0725 18:20:49.530770   42258 command_runner.go:130] ! time="2024-07-25 18:20:49.491406837Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0725 18:20:49.530787   42258 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0725 18:20:49.530889   42258 cni.go:84] Creating CNI manager for ""
	I0725 18:20:49.530899   42258 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0725 18:20:49.530907   42258 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:20:49.530925   42258 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.54 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-253131 NodeName:multinode-253131 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:20:49.531048   42258 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-253131"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:20:49.531109   42258 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:20:49.540359   42258 command_runner.go:130] > kubeadm
	I0725 18:20:49.540380   42258 command_runner.go:130] > kubectl
	I0725 18:20:49.540384   42258 command_runner.go:130] > kubelet
	I0725 18:20:49.540404   42258 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:20:49.540456   42258 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:20:49.549110   42258 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0725 18:20:49.564714   42258 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:20:49.580521   42258 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0725 18:20:49.595696   42258 ssh_runner.go:195] Run: grep 192.168.39.54	control-plane.minikube.internal$ /etc/hosts
	I0725 18:20:49.599030   42258 command_runner.go:130] > 192.168.39.54	control-plane.minikube.internal
	I0725 18:20:49.599192   42258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:20:49.733120   42258 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:20:49.747279   42258 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131 for IP: 192.168.39.54
	I0725 18:20:49.747305   42258 certs.go:194] generating shared ca certs ...
	I0725 18:20:49.747325   42258 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:20:49.747512   42258 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:20:49.747567   42258 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:20:49.747579   42258 certs.go:256] generating profile certs ...
	I0725 18:20:49.747672   42258 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/client.key
	I0725 18:20:49.747751   42258 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/apiserver.key.64a64755
	I0725 18:20:49.747797   42258 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/proxy-client.key
	I0725 18:20:49.747808   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 18:20:49.747820   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 18:20:49.747832   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 18:20:49.747845   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 18:20:49.747858   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 18:20:49.747871   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 18:20:49.747884   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 18:20:49.747896   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 18:20:49.747942   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:20:49.747970   42258 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:20:49.747976   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:20:49.747996   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:20:49.748013   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:20:49.748032   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:20:49.748068   42258 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:20:49.748101   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:20:49.748119   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 18:20:49.748136   42258 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 18:20:49.748710   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:20:49.771868   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:20:49.793159   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:20:49.815885   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:20:49.838054   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0725 18:20:49.860795   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:20:49.883190   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:20:49.904730   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/multinode-253131/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:20:49.926794   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:20:49.948848   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:20:49.971805   42258 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:20:49.993511   42258 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:20:50.008446   42258 ssh_runner.go:195] Run: openssl version
	I0725 18:20:50.013733   42258 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0725 18:20:50.013809   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:20:50.024202   42258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:20:50.028186   42258 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:20:50.028284   42258 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:20:50.028348   42258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:20:50.033433   42258 command_runner.go:130] > 3ec20f2e
	I0725 18:20:50.033643   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:20:50.042362   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:20:50.052085   42258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:20:50.055988   42258 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:20:50.056013   42258 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:20:50.056041   42258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:20:50.061249   42258 command_runner.go:130] > b5213941
	I0725 18:20:50.061299   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:20:50.069947   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:20:50.083490   42258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:20:50.102729   42258 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:20:50.102762   42258 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:20:50.102810   42258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:20:50.115563   42258 command_runner.go:130] > 51391683
	I0725 18:20:50.115630   42258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
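	[editor's note] The sequence above installs each CA bundle under /usr/share/ca-certificates and then makes it discoverable by OpenSSL's hashed-lookup scheme: openssl x509 -hash -noout prints the subject-name hash (b5213941 for minikubeCA.pem, 3ec20f2e and 51391683 for the other two), and the certificate is symlinked as /etc/ssl/certs/<hash>.0. The Go sketch below reproduces those two steps for one certificate; it is illustrative only, not minikube's actual code, and the certificate path is simply copied from the log.

	    // Hypothetical sketch: hash a CA certificate and link it under /etc/ssl/certs,
	    // mirroring the openssl/ln commands recorded in the log above.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log

	        // "openssl x509 -hash -noout -in <cert>" prints the subject-name hash,
	        // e.g. "b5213941" as seen above.
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	        if err != nil {
	            panic(err)
	        }
	        hash := strings.TrimSpace(string(out))

	        // Link the certificate as /etc/ssl/certs/<hash>.0 so OpenSSL's default
	        // hashed-directory lookup can find it.
	        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	        if err := exec.Command("sudo", "ln", "-fs", cert, link).Run(); err != nil {
	            panic(err)
	        }
	        fmt.Printf("linked %s -> %s\n", link, cert)
	    }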
	I0725 18:20:50.163470   42258 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:20:50.179418   42258 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:20:50.179443   42258 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0725 18:20:50.179452   42258 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0725 18:20:50.179458   42258 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0725 18:20:50.179465   42258 command_runner.go:130] > Access: 2024-07-25 18:14:03.452350324 +0000
	I0725 18:20:50.179470   42258 command_runner.go:130] > Modify: 2024-07-25 18:14:03.452350324 +0000
	I0725 18:20:50.179475   42258 command_runner.go:130] > Change: 2024-07-25 18:14:03.452350324 +0000
	I0725 18:20:50.179479   42258 command_runner.go:130] >  Birth: 2024-07-25 18:14:03.452350324 +0000
	I0725 18:20:50.179570   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:20:50.186673   42258 command_runner.go:130] > Certificate will not expire
	I0725 18:20:50.186740   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:20:50.193572   42258 command_runner.go:130] > Certificate will not expire
	I0725 18:20:50.193709   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:20:50.200948   42258 command_runner.go:130] > Certificate will not expire
	I0725 18:20:50.201200   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:20:50.207041   42258 command_runner.go:130] > Certificate will not expire
	I0725 18:20:50.208275   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:20:50.217642   42258 command_runner.go:130] > Certificate will not expire
	I0725 18:20:50.217724   42258 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 18:20:50.227637   42258 command_runner.go:130] > Certificate will not expire
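	[editor's note] Each control-plane certificate is then probed with openssl x509 -checkend 86400, which exits zero (printing "Certificate will not expire") when the certificate is still valid 24 hours from now. The snippet below is a rough Go equivalent of that check using crypto/x509; the file path is copied from the log, and the comparison is an assumption about -checkend's semantics rather than minikube's implementation.

	    // Illustrative only: a 24-hour expiry check comparable to
	    // "openssl x509 -noout -checkend 86400", written with crypto/x509.
	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    func main() {
	        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // path from the log
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            panic("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        // -checkend 86400 asks: will the certificate still be valid 86400 seconds from now?
	        if time.Now().Add(86400 * time.Second).Before(cert.NotAfter) {
	            fmt.Println("Certificate will not expire")
	        } else {
	            fmt.Println("Certificate will expire")
	        }
	    }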
	I0725 18:20:50.227703   42258 kubeadm.go:392] StartCluster: {Name:multinode-253131 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-253131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:20:50.227835   42258 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:20:50.227879   42258 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:20:50.272974   42258 command_runner.go:130] > 92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf
	I0725 18:20:50.273005   42258 command_runner.go:130] > 74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314
	I0725 18:20:50.273014   42258 command_runner.go:130] > fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e
	I0725 18:20:50.273026   42258 command_runner.go:130] > 393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3
	I0725 18:20:50.273035   42258 command_runner.go:130] > 28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19
	I0725 18:20:50.273045   42258 command_runner.go:130] > 2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879
	I0725 18:20:50.273055   42258 command_runner.go:130] > 79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7
	I0725 18:20:50.273066   42258 command_runner.go:130] > a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601
	I0725 18:20:50.278353   42258 cri.go:89] found id: "92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf"
	I0725 18:20:50.278386   42258 cri.go:89] found id: "74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314"
	I0725 18:20:50.278392   42258 cri.go:89] found id: "fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e"
	I0725 18:20:50.278396   42258 cri.go:89] found id: "393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3"
	I0725 18:20:50.278400   42258 cri.go:89] found id: "28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19"
	I0725 18:20:50.278405   42258 cri.go:89] found id: "2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879"
	I0725 18:20:50.278409   42258 cri.go:89] found id: "79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7"
	I0725 18:20:50.278413   42258 cri.go:89] found id: "a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601"
	I0725 18:20:50.278416   42258 cri.go:89] found id: ""
	I0725 18:20:50.278467   42258 ssh_runner.go:195] Run: sudo runc list -f json
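	[editor's note] Before restarting the cluster, the kube-system containers are enumerated with crictl ps -a --quiet filtered by the pod-namespace label, which yields the container IDs listed above. The sketch below shows that enumeration step in isolation; it simply shells out to crictl the same way and splits the output into IDs, and is illustrative rather than minikube's cri package.

	    // Hypothetical sketch: list all kube-system container IDs via crictl,
	    // matching the command recorded in the log above.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
	            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	        if err != nil {
	            panic(err)
	        }
	        var ids []string
	        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
	            if line != "" {
	                ids = append(ids, line)
	            }
	        }
	        fmt.Printf("found %d kube-system containers\n", len(ids))
	    }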
	
	
	==> CRI-O <==
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.367940403Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721931899367862208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5acafd0e-d5a6-48d9-a3ad-d9344d11333b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.368433011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8dd1ea96-20e4-4e11-8696-b0c537f2607b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.368491818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8dd1ea96-20e4-4e11-8696-b0c537f2607b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.368854522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:682a55d67f4d2e97c9b5bbd4fac64a786da9ac2f902fb8413be1698b92504637,PodSandboxId:319e7aaaf7f9b3af07d4bb4a6ddc0fef65ff12c8730b3e2d060eecb47ffdf496,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721931690477505404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:061828a7da84f56a395fbd33965b2f380017500e948065f9b7ae570ad2cc6e59,PodSandboxId:0eec97385011b041c3294691946d1acf7ee38d54eb7d8ececaee5aa43f17508e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721931656744956479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175606982c3a8f8b49fbc0a9725c3d028ceb84a8b844fad4f595b912d38345bf,PodSandboxId:a8adf961fc6ba52a0c7416ee2384171e154f8807f4bb465fc103f7dbee23115a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721931656623773956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c30dbb647c586f95a4c05eee6aa5b984946e5a5ca47340f7e75e10d48037ba2,PodSandboxId:33dc8c79f7918cd4a402bae8b49e3b3eae54153ebbfab2a841f71d9671f87e5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721931656617465486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dceff06347a622de3709046ed5c6288223084c570922b66f3e21c4995037a0,PodSandboxId:bc1d6b028edd96c2bacfaa56712ff65a4b2f76659a8b461b6731c7aaca5d61f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721931656454667280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95f1dc59987d9bd7f9758959b9c1ccebfb2c65f2c3ab1076e434fec0844f2942,PodSandboxId:7e22c7dcc33411d89fc797dbd7b33e27214e73f5917e85764f41f275d05d7658,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721931652855385441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268ab6f9cddbcf651bef9126162a4114e8a3fc2ba7fb242dd8466a171e38a3a3,PodSandboxId:914bb26efea4c681acbcf9318f911fb1dd731330adda949b717769683593d449,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721931652827062892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f021a008e5e0fd1e671a5223b290e37dc6b833fcc88ecae62a13f7d7c52803,PodSandboxId:5caa427ed6f28db1d23631caaea08a84c458e65f778fc2399421cd0ababaee64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721931652825607639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b7d2bfc585bd62a1000712c61c55365116c5d2c7ff8e1119e92d8d00f03d90,PodSandboxId:6de88394a04d93e1585795b8db4095443d18071d4abcd9cad52d1eb641b421bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721931652775229033,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:167ef00cd9d3102c911c5423edd85799224cd96e8f3a170a7194908f5dec5045,PodSandboxId:62c1f7c61ac0bfe01bc9a49a746b9583afd30f7dd7079ae512f164022059732d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721931334165439984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf,PodSandboxId:460272c4df4757240f7b974f6c89ef2fb602cf733a8d946b855830ad8903d979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721931281543844474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314,PodSandboxId:f21693e764cdc377626a91c1f6ca8e8e11b0a96e489d5566d8c172dceee21e44,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721931281460573186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e,PodSandboxId:4229d7f04f2ccc1bc2e5c50e63f5084e9b04e41d277d0327f1133928ecefd661,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721931269955087773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3,PodSandboxId:edcbd05703aa08de6b4ff81b30dc40975d96556e08145968b95dbcc4b52f1d7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721931266197249766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879,PodSandboxId:4498f66aec6451e6b0dda70a070c539c83878c0f3998cb8ddad8622b66e7f015,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721931246945985722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278
,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19,PodSandboxId:3530f05aeb7ba304d086f38a388c2e203a1f6309e7df72be8b319d416eb247f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721931246984322341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7,PodSandboxId:45ca4744c187c19a1ba96aaa96aa1c8788f772915ff4c149208d6f2488151cfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721931246923473987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601,PodSandboxId:a235fb6d24116b5e920d25660b9bef08c1f249f01d962193d49cbf70128cb95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721931246884506018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8dd1ea96-20e4-4e11-8696-b0c537f2607b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.416280204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63024ffe-4199-4018-b61e-76c60efbea62 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.416402715Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63024ffe-4199-4018-b61e-76c60efbea62 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.418079584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd9017f4-0318-424b-9022-3cb9b7f05319 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.418488590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721931899418467737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd9017f4-0318-424b-9022-3cb9b7f05319 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.419279817Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c66c45a7-2299-4c2a-9cb9-373ca2d9833c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.419528384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c66c45a7-2299-4c2a-9cb9-373ca2d9833c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.420326093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:682a55d67f4d2e97c9b5bbd4fac64a786da9ac2f902fb8413be1698b92504637,PodSandboxId:319e7aaaf7f9b3af07d4bb4a6ddc0fef65ff12c8730b3e2d060eecb47ffdf496,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721931690477505404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:061828a7da84f56a395fbd33965b2f380017500e948065f9b7ae570ad2cc6e59,PodSandboxId:0eec97385011b041c3294691946d1acf7ee38d54eb7d8ececaee5aa43f17508e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721931656744956479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175606982c3a8f8b49fbc0a9725c3d028ceb84a8b844fad4f595b912d38345bf,PodSandboxId:a8adf961fc6ba52a0c7416ee2384171e154f8807f4bb465fc103f7dbee23115a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721931656623773956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c30dbb647c586f95a4c05eee6aa5b984946e5a5ca47340f7e75e10d48037ba2,PodSandboxId:33dc8c79f7918cd4a402bae8b49e3b3eae54153ebbfab2a841f71d9671f87e5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721931656617465486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dceff06347a622de3709046ed5c6288223084c570922b66f3e21c4995037a0,PodSandboxId:bc1d6b028edd96c2bacfaa56712ff65a4b2f76659a8b461b6731c7aaca5d61f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721931656454667280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95f1dc59987d9bd7f9758959b9c1ccebfb2c65f2c3ab1076e434fec0844f2942,PodSandboxId:7e22c7dcc33411d89fc797dbd7b33e27214e73f5917e85764f41f275d05d7658,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721931652855385441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268ab6f9cddbcf651bef9126162a4114e8a3fc2ba7fb242dd8466a171e38a3a3,PodSandboxId:914bb26efea4c681acbcf9318f911fb1dd731330adda949b717769683593d449,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721931652827062892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f021a008e5e0fd1e671a5223b290e37dc6b833fcc88ecae62a13f7d7c52803,PodSandboxId:5caa427ed6f28db1d23631caaea08a84c458e65f778fc2399421cd0ababaee64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721931652825607639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b7d2bfc585bd62a1000712c61c55365116c5d2c7ff8e1119e92d8d00f03d90,PodSandboxId:6de88394a04d93e1585795b8db4095443d18071d4abcd9cad52d1eb641b421bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721931652775229033,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:167ef00cd9d3102c911c5423edd85799224cd96e8f3a170a7194908f5dec5045,PodSandboxId:62c1f7c61ac0bfe01bc9a49a746b9583afd30f7dd7079ae512f164022059732d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721931334165439984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf,PodSandboxId:460272c4df4757240f7b974f6c89ef2fb602cf733a8d946b855830ad8903d979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721931281543844474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314,PodSandboxId:f21693e764cdc377626a91c1f6ca8e8e11b0a96e489d5566d8c172dceee21e44,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721931281460573186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e,PodSandboxId:4229d7f04f2ccc1bc2e5c50e63f5084e9b04e41d277d0327f1133928ecefd661,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721931269955087773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3,PodSandboxId:edcbd05703aa08de6b4ff81b30dc40975d96556e08145968b95dbcc4b52f1d7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721931266197249766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879,PodSandboxId:4498f66aec6451e6b0dda70a070c539c83878c0f3998cb8ddad8622b66e7f015,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721931246945985722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278
,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19,PodSandboxId:3530f05aeb7ba304d086f38a388c2e203a1f6309e7df72be8b319d416eb247f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721931246984322341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7,PodSandboxId:45ca4744c187c19a1ba96aaa96aa1c8788f772915ff4c149208d6f2488151cfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721931246923473987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601,PodSandboxId:a235fb6d24116b5e920d25660b9bef08c1f249f01d962193d49cbf70128cb95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721931246884506018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c66c45a7-2299-4c2a-9cb9-373ca2d9833c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.461508042Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27ad32f1-1927-4b26-b2ac-40ed59f42b72 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.461578347Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27ad32f1-1927-4b26-b2ac-40ed59f42b72 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.462454597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3bb70515-0c1d-4861-9589-215a7d3aaecd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.462930607Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721931899462827302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3bb70515-0c1d-4861-9589-215a7d3aaecd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.463485346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f960d479-6cca-4380-8e75-f668b64419f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.463554076Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f960d479-6cca-4380-8e75-f668b64419f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.463944851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:682a55d67f4d2e97c9b5bbd4fac64a786da9ac2f902fb8413be1698b92504637,PodSandboxId:319e7aaaf7f9b3af07d4bb4a6ddc0fef65ff12c8730b3e2d060eecb47ffdf496,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721931690477505404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:061828a7da84f56a395fbd33965b2f380017500e948065f9b7ae570ad2cc6e59,PodSandboxId:0eec97385011b041c3294691946d1acf7ee38d54eb7d8ececaee5aa43f17508e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721931656744956479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175606982c3a8f8b49fbc0a9725c3d028ceb84a8b844fad4f595b912d38345bf,PodSandboxId:a8adf961fc6ba52a0c7416ee2384171e154f8807f4bb465fc103f7dbee23115a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721931656623773956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c30dbb647c586f95a4c05eee6aa5b984946e5a5ca47340f7e75e10d48037ba2,PodSandboxId:33dc8c79f7918cd4a402bae8b49e3b3eae54153ebbfab2a841f71d9671f87e5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721931656617465486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dceff06347a622de3709046ed5c6288223084c570922b66f3e21c4995037a0,PodSandboxId:bc1d6b028edd96c2bacfaa56712ff65a4b2f76659a8b461b6731c7aaca5d61f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721931656454667280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95f1dc59987d9bd7f9758959b9c1ccebfb2c65f2c3ab1076e434fec0844f2942,PodSandboxId:7e22c7dcc33411d89fc797dbd7b33e27214e73f5917e85764f41f275d05d7658,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721931652855385441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268ab6f9cddbcf651bef9126162a4114e8a3fc2ba7fb242dd8466a171e38a3a3,PodSandboxId:914bb26efea4c681acbcf9318f911fb1dd731330adda949b717769683593d449,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721931652827062892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f021a008e5e0fd1e671a5223b290e37dc6b833fcc88ecae62a13f7d7c52803,PodSandboxId:5caa427ed6f28db1d23631caaea08a84c458e65f778fc2399421cd0ababaee64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721931652825607639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b7d2bfc585bd62a1000712c61c55365116c5d2c7ff8e1119e92d8d00f03d90,PodSandboxId:6de88394a04d93e1585795b8db4095443d18071d4abcd9cad52d1eb641b421bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721931652775229033,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:167ef00cd9d3102c911c5423edd85799224cd96e8f3a170a7194908f5dec5045,PodSandboxId:62c1f7c61ac0bfe01bc9a49a746b9583afd30f7dd7079ae512f164022059732d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721931334165439984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf,PodSandboxId:460272c4df4757240f7b974f6c89ef2fb602cf733a8d946b855830ad8903d979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721931281543844474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314,PodSandboxId:f21693e764cdc377626a91c1f6ca8e8e11b0a96e489d5566d8c172dceee21e44,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721931281460573186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e,PodSandboxId:4229d7f04f2ccc1bc2e5c50e63f5084e9b04e41d277d0327f1133928ecefd661,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721931269955087773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3,PodSandboxId:edcbd05703aa08de6b4ff81b30dc40975d96556e08145968b95dbcc4b52f1d7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721931266197249766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879,PodSandboxId:4498f66aec6451e6b0dda70a070c539c83878c0f3998cb8ddad8622b66e7f015,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721931246945985722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278
,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19,PodSandboxId:3530f05aeb7ba304d086f38a388c2e203a1f6309e7df72be8b319d416eb247f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721931246984322341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7,PodSandboxId:45ca4744c187c19a1ba96aaa96aa1c8788f772915ff4c149208d6f2488151cfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721931246923473987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601,PodSandboxId:a235fb6d24116b5e920d25660b9bef08c1f249f01d962193d49cbf70128cb95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721931246884506018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f960d479-6cca-4380-8e75-f668b64419f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.502381667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cba10921-0d14-47d4-b58c-e3afb4acf2b7 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.502466538Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cba10921-0d14-47d4-b58c-e3afb4acf2b7 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.503718022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d92e03e3-d2e8-4225-8451-5c5e48af0e75 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.504320658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721931899504293097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d92e03e3-d2e8-4225-8451-5c5e48af0e75 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.505007915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=377b6fbd-ab73-4a87-a0ef-89b4c2bac329 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.505066701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=377b6fbd-ab73-4a87-a0ef-89b4c2bac329 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:24:59 multinode-253131 crio[2878]: time="2024-07-25 18:24:59.505410339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:682a55d67f4d2e97c9b5bbd4fac64a786da9ac2f902fb8413be1698b92504637,PodSandboxId:319e7aaaf7f9b3af07d4bb4a6ddc0fef65ff12c8730b3e2d060eecb47ffdf496,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721931690477505404,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:061828a7da84f56a395fbd33965b2f380017500e948065f9b7ae570ad2cc6e59,PodSandboxId:0eec97385011b041c3294691946d1acf7ee38d54eb7d8ececaee5aa43f17508e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1721931656744956479,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175606982c3a8f8b49fbc0a9725c3d028ceb84a8b844fad4f595b912d38345bf,PodSandboxId:a8adf961fc6ba52a0c7416ee2384171e154f8807f4bb465fc103f7dbee23115a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721931656623773956,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c30dbb647c586f95a4c05eee6aa5b984946e5a5ca47340f7e75e10d48037ba2,PodSandboxId:33dc8c79f7918cd4a402bae8b49e3b3eae54153ebbfab2a841f71d9671f87e5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721931656617465486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dceff06347a622de3709046ed5c6288223084c570922b66f3e21c4995037a0,PodSandboxId:bc1d6b028edd96c2bacfaa56712ff65a4b2f76659a8b461b6731c7aaca5d61f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721931656454667280,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95f1dc59987d9bd7f9758959b9c1ccebfb2c65f2c3ab1076e434fec0844f2942,PodSandboxId:7e22c7dcc33411d89fc797dbd7b33e27214e73f5917e85764f41f275d05d7658,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721931652855385441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268ab6f9cddbcf651bef9126162a4114e8a3fc2ba7fb242dd8466a171e38a3a3,PodSandboxId:914bb26efea4c681acbcf9318f911fb1dd731330adda949b717769683593d449,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721931652827062892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[string]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f021a008e5e0fd1e671a5223b290e37dc6b833fcc88ecae62a13f7d7c52803,PodSandboxId:5caa427ed6f28db1d23631caaea08a84c458e65f778fc2399421cd0ababaee64,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721931652825607639,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b7d2bfc585bd62a1000712c61c55365116c5d2c7ff8e1119e92d8d00f03d90,PodSandboxId:6de88394a04d93e1585795b8db4095443d18071d4abcd9cad52d1eb641b421bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721931652775229033,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:167ef00cd9d3102c911c5423edd85799224cd96e8f3a170a7194908f5dec5045,PodSandboxId:62c1f7c61ac0bfe01bc9a49a746b9583afd30f7dd7079ae512f164022059732d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721931334165439984,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-gfbkg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 867fbc0d-ad43-47e9-9bb1-a83711108175,},Annotations:map[string]string{io.kubernetes.container.hash: 81220fa0,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf,PodSandboxId:460272c4df4757240f7b974f6c89ef2fb602cf733a8d946b855830ad8903d979,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721931281543844474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6lrr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76b677de-805b-44fc-930b-ee22b62f899d,},Annotations:map[string]string{io.kubernetes.container.hash: b5a0d3f2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74171fcbfdd95b9dfcecf3208be5299e2c582cdcceec4576c938c17458c35314,PodSandboxId:f21693e764cdc377626a91c1f6ca8e8e11b0a96e489d5566d8c172dceee21e44,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721931281460573186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 388889a4-653d-4351-b19e-454285b56dd5,},Annotations:map[string]string{io.kubernetes.container.hash: b92b062f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e,PodSandboxId:4229d7f04f2ccc1bc2e5c50e63f5084e9b04e41d277d0327f1133928ecefd661,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1721931269955087773,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hvwf2,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 2d1b1ec9-65be-45a4-bc80-f2f13f2349bc,},Annotations:map[string]string{io.kubernetes.container.hash: 8eeeb3ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3,PodSandboxId:edcbd05703aa08de6b4ff81b30dc40975d96556e08145968b95dbcc4b52f1d7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721931266197249766,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgrbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 5bd539cc-9683-496d-9aea-545539fcf647,},Annotations:map[string]string{io.kubernetes.container.hash: fb46d3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879,PodSandboxId:4498f66aec6451e6b0dda70a070c539c83878c0f3998cb8ddad8622b66e7f015,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721931246945985722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bcea2a9e180446d8917e2d2ad351278
,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7bd2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19,PodSandboxId:3530f05aeb7ba304d086f38a388c2e203a1f6309e7df72be8b319d416eb247f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721931246984322341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef591b27cac22f258a5124a896bba69c,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7,PodSandboxId:45ca4744c187c19a1ba96aaa96aa1c8788f772915ff4c149208d6f2488151cfc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721931246923473987,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6996255518f3fbc659525d5a07c6ba55,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601,PodSandboxId:a235fb6d24116b5e920d25660b9bef08c1f249f01d962193d49cbf70128cb95a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721931246884506018,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-253131,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a1c36aae09cd96c70543ba1d014e1b9,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 1a39b837,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=377b6fbd-ab73-4a87-a0ef-89b4c2bac329 name=/runtime.v1.RuntimeService/ListContainers
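The entries above come from the CRI-O journal on the control-plane node: each Version/ImageFsInfo/ListContainers trio is one poll of the CRI RuntimeService, logged at debug level. A minimal sketch of how the same stream can be inspected by hand, assuming the multinode-253131 profile is still running and CRI-O is managed by systemd inside the VM:

  out/minikube-linux-amd64 -p multinode-253131 ssh "sudo journalctl -u crio --no-pager | tail -n 100"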
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	682a55d67f4d2       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   319e7aaaf7f9b       busybox-fc5497c4f-gfbkg
	061828a7da84f       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   0eec97385011b       kindnet-hvwf2
	175606982c3a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   a8adf961fc6ba       storage-provisioner
	9c30dbb647c58       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   33dc8c79f7918       kube-proxy-zgrbq
	31dceff06347a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   bc1d6b028edd9       coredns-7db6d8ff4d-6lrr5
	95f1dc59987d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   7e22c7dcc3341       etcd-multinode-253131
	268ab6f9cddbc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   914bb26efea4c       kube-apiserver-multinode-253131
	33f021a008e5e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   5caa427ed6f28       kube-scheduler-multinode-253131
	43b7d2bfc585b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   6de88394a04d9       kube-controller-manager-multinode-253131
	167ef00cd9d31       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   62c1f7c61ac0b       busybox-fc5497c4f-gfbkg
	92575c8e1c68f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   460272c4df475       coredns-7db6d8ff4d-6lrr5
	74171fcbfdd95       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   f21693e764cdc       storage-provisioner
	fd663c148a619       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   4229d7f04f2cc       kindnet-hvwf2
	393e599ba9386       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   edcbd05703aa0       kube-proxy-zgrbq
	28021ff9ef2d5       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   3530f05aeb7ba       kube-scheduler-multinode-253131
	2c878462f2ec4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   4498f66aec645       etcd-multinode-253131
	79df99dc269c4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   45ca4744c187c       kube-controller-manager-multinode-253131
	a7e2ce3e3194e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   a235fb6d24116       kube-apiserver-multinode-253131
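The listing above is the CRI view of every container on the node: the attempt-0 entries in state Exited are from the previous run of each pod, and the attempt-1 entries in state Running are their restarted replacements. A sketch of how an equivalent listing can be pulled directly, assuming crictl is available inside the minikube VM (it normally is with the crio runtime):

  out/minikube-linux-amd64 -p multinode-253131 ssh "sudo crictl ps -a"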
	
	
	==> coredns [31dceff06347a622de3709046ed5c6288223084c570922b66f3e21c4995037a0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50178 - 41374 "HINFO IN 4171186552993392796.211708472645225929. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012901345s
	
	
	==> coredns [92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf] <==
	[INFO] 10.244.0.3:53535 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001924562s
	[INFO] 10.244.0.3:40356 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000041007s
	[INFO] 10.244.0.3:38393 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000026573s
	[INFO] 10.244.0.3:48020 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001291991s
	[INFO] 10.244.0.3:43625 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000672s
	[INFO] 10.244.0.3:34247 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028344s
	[INFO] 10.244.0.3:49942 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000026315s
	[INFO] 10.244.1.2:57962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010389s
	[INFO] 10.244.1.2:35238 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072557s
	[INFO] 10.244.1.2:35287 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006109s
	[INFO] 10.244.1.2:58908 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057223s
	[INFO] 10.244.0.3:54929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113877s
	[INFO] 10.244.0.3:53283 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055106s
	[INFO] 10.244.0.3:47891 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000036347s
	[INFO] 10.244.0.3:49543 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040581s
	[INFO] 10.244.1.2:35077 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121149s
	[INFO] 10.244.1.2:53263 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000241449s
	[INFO] 10.244.1.2:41010 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147928s
	[INFO] 10.244.1.2:38095 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188619s
	[INFO] 10.244.0.3:40730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125004s
	[INFO] 10.244.0.3:60796 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121839s
	[INFO] 10.244.0.3:41629 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109043s
	[INFO] 10.244.0.3:52087 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088488s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
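This second coredns block ends with the SIGTERM shutdown, so it belongs to the exited attempt-0 container; the short block above it is the attempt-1 replacement. A hedged sketch for pulling both attempts of the logs from outside the node, assuming the kubeconfig context created for this profile:

  kubectl --context multinode-253131 -n kube-system logs coredns-7db6d8ff4d-6lrr5
  kubectl --context multinode-253131 -n kube-system logs coredns-7db6d8ff4d-6lrr5 --previous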
	
	
	==> describe nodes <==
	Name:               multinode-253131
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-253131
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=multinode-253131
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_14_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:14:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-253131
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:24:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 18:20:55 +0000   Thu, 25 Jul 2024 18:14:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 18:20:55 +0000   Thu, 25 Jul 2024 18:14:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 18:20:55 +0000   Thu, 25 Jul 2024 18:14:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 18:20:55 +0000   Thu, 25 Jul 2024 18:14:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    multinode-253131
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6d6d4c867ba4a5d817cd83a319b5b8c
	  System UUID:                d6d6d4c8-67ba-4a5d-817c-d83a319b5b8c
	  Boot ID:                    f0bb354f-9a8c-4409-83f9-236961443b72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gfbkg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 coredns-7db6d8ff4d-6lrr5                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-253131                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-hvwf2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-253131             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-253131    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-zgrbq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-253131             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-253131 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-253131 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-253131 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-253131 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-253131 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-253131 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-253131 event: Registered Node multinode-253131 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-253131 status is now: NodeReady
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node multinode-253131 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node multinode-253131 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node multinode-253131 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                node-controller  Node multinode-253131 event: Registered Node multinode-253131 in Controller
	
	
	Name:               multinode-253131-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-253131-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=multinode-253131
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_25T18_21_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:21:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-253131-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:22:36 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 25 Jul 2024 18:22:05 +0000   Thu, 25 Jul 2024 18:23:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 25 Jul 2024 18:22:05 +0000   Thu, 25 Jul 2024 18:23:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 25 Jul 2024 18:22:05 +0000   Thu, 25 Jul 2024 18:23:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 25 Jul 2024 18:22:05 +0000   Thu, 25 Jul 2024 18:23:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    multinode-253131-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ee4045f58eb48c4aa38ef605c6033e3
	  System UUID:                6ee4045f-58eb-48c4-aa38-ef605c6033e3
	  Boot ID:                    c7b54ae9-5cca-4a0c-b9e7-a523e34cc176
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9c2k9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-zd9dg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m51s
	  kube-system                 kube-proxy-rhvxz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m51s (x2 over 9m51s)  kubelet          Node multinode-253131-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m51s (x2 over 9m51s)  kubelet          Node multinode-253131-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m51s (x2 over 9m51s)  kubelet          Node multinode-253131-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m31s                  kubelet          Node multinode-253131-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-253131-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-253131-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-253131-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-253131-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-253131-m02 status is now: NodeNotReady
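
The conditions for multinode-253131-m02 above all read Unknown with "Kubelet stopped posting node status," and the node carries node.kubernetes.io/unreachable taints, which is why the node controller records the NodeNotReady event. A minimal way to pull just those fields when triaging a run like this (illustrative kubectl invocations assuming the profile's default context name multinode-253131; these commands are not part of the captured test output):

  kubectl --context multinode-253131 get node multinode-253131-m02 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
  kubectl --context multinode-253131 get node multinode-253131-m02 \
    -o jsonpath='{range .spec.taints[*]}{.key}:{.effect}{"\n"}{end}'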
	
	
	==> dmesg <==
	[  +0.067914] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063994] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.208517] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.135063] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.253734] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[Jul25 18:14] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +3.850901] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +0.068145] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.016039] systemd-fstab-generator[1276]: Ignoring "noauto" option for root device
	[  +0.083892] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.127659] systemd-fstab-generator[1472]: Ignoring "noauto" option for root device
	[  +0.121732] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.012814] kauditd_printk_skb: 59 callbacks suppressed
	[Jul25 18:15] kauditd_printk_skb: 12 callbacks suppressed
	[Jul25 18:20] systemd-fstab-generator[2796]: Ignoring "noauto" option for root device
	[  +0.144900] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.155896] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.135666] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.262065] systemd-fstab-generator[2862]: Ignoring "noauto" option for root device
	[  +1.476617] systemd-fstab-generator[2960]: Ignoring "noauto" option for root device
	[  +2.254542] systemd-fstab-generator[3146]: Ignoring "noauto" option for root device
	[  +0.825036] kauditd_printk_skb: 149 callbacks suppressed
	[Jul25 18:21] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.110764] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[ +20.059547] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2c878462f2ec411c0110e92c5f3317eeecd0aabafa75a223cb0bfae5ab7e4879] <==
	{"level":"info","ts":"2024-07-25T18:15:18.202255Z","caller":"traceutil/trace.go:171","msg":"trace[1132435649] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:521; }","duration":"202.262362ms","start":"2024-07-25T18:15:17.999966Z","end":"2024-07-25T18:15:18.202229Z","steps":["trace[1132435649] 'agreement among raft nodes before linearized reading'  (duration: 201.564303ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T18:15:18.201712Z","caller":"traceutil/trace.go:171","msg":"trace[1125777536] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"222.960569ms","start":"2024-07-25T18:15:17.978736Z","end":"2024-07-25T18:15:18.201697Z","steps":["trace[1125777536] 'process raft request'  (duration: 222.520388ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T18:15:18.455669Z","caller":"traceutil/trace.go:171","msg":"trace[36567743] linearizableReadLoop","detail":"{readStateIndex:546; appliedIndex:545; }","duration":"179.486757ms","start":"2024-07-25T18:15:18.276168Z","end":"2024-07-25T18:15:18.455655Z","steps":["trace[36567743] 'read index received'  (duration: 113.770988ms)","trace[36567743] 'applied index is now lower than readState.Index'  (duration: 65.714927ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-25T18:15:18.455834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.6521ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-253131-m02\" ","response":"range_response_count:1 size:2953"}
	{"level":"info","ts":"2024-07-25T18:15:18.455924Z","caller":"traceutil/trace.go:171","msg":"trace[1693464086] range","detail":"{range_begin:/registry/minions/multinode-253131-m02; range_end:; response_count:1; response_revision:522; }","duration":"179.738515ms","start":"2024-07-25T18:15:18.276144Z","end":"2024-07-25T18:15:18.455883Z","steps":["trace[1693464086] 'agreement among raft nodes before linearized reading'  (duration: 179.57391ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T18:15:18.456132Z","caller":"traceutil/trace.go:171","msg":"trace[1945710325] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"248.038844ms","start":"2024-07-25T18:15:18.208083Z","end":"2024-07-25T18:15:18.456121Z","steps":["trace[1945710325] 'process raft request'  (duration: 181.957956ms)","trace[1945710325] 'compare'  (duration: 65.406256ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-25T18:16:02.000427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.344708ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7068277479603876475 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-253131-m03.17e5877348105532\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-253131-m03.17e5877348105532\" value_size:646 lease:7068277479603876095 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-25T18:16:02.000646Z","caller":"traceutil/trace.go:171","msg":"trace[691096272] linearizableReadLoop","detail":"{readStateIndex:640; appliedIndex:638; }","duration":"107.800102ms","start":"2024-07-25T18:16:01.892823Z","end":"2024-07-25T18:16:02.000623Z","steps":["trace[691096272] 'read index received'  (duration: 105.489754ms)","trace[691096272] 'applied index is now lower than readState.Index'  (duration: 2.309459ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T18:16:02.000655Z","caller":"traceutil/trace.go:171","msg":"trace[1269904372] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"246.627151ms","start":"2024-07-25T18:16:01.754012Z","end":"2024-07-25T18:16:02.000639Z","steps":["trace[1269904372] 'process raft request'  (duration: 55.020776ms)","trace[1269904372] 'compare'  (duration: 191.171658ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T18:16:02.000768Z","caller":"traceutil/trace.go:171","msg":"trace[939474893] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"168.100067ms","start":"2024-07-25T18:16:01.832662Z","end":"2024-07-25T18:16:02.000762Z","steps":["trace[939474893] 'process raft request'  (duration: 167.913184ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T18:16:02.001001Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.173093ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-253131-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-25T18:16:02.001037Z","caller":"traceutil/trace.go:171","msg":"trace[595896803] range","detail":"{range_begin:/registry/minions/multinode-253131-m03; range_end:; response_count:1; response_revision:607; }","duration":"108.234708ms","start":"2024-07-25T18:16:01.892796Z","end":"2024-07-25T18:16:02.00103Z","steps":["trace[595896803] 'agreement among raft nodes before linearized reading'  (duration: 108.065704ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T18:16:10.268984Z","caller":"traceutil/trace.go:171","msg":"trace[52826067] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"212.53219ms","start":"2024-07-25T18:16:10.056342Z","end":"2024-07-25T18:16:10.268874Z","steps":["trace[52826067] 'process raft request'  (duration: 212.427547ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T18:16:10.62156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.934133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-253131-m03\" ","response":"range_response_count:1 size:3229"}
	{"level":"info","ts":"2024-07-25T18:16:10.621653Z","caller":"traceutil/trace.go:171","msg":"trace[1908371880] range","detail":"{range_begin:/registry/minions/multinode-253131-m03; range_end:; response_count:1; response_revision:650; }","duration":"116.072543ms","start":"2024-07-25T18:16:10.505565Z","end":"2024-07-25T18:16:10.621638Z","steps":["trace[1908371880] 'range keys from in-memory index tree'  (duration: 115.811209ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T18:19:16.100916Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-25T18:19:16.101039Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-253131","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"]}
	{"level":"warn","ts":"2024-07-25T18:19:16.101174Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:19:16.10128Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:19:16.182984Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:19:16.183026Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.54:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-25T18:19:16.183118Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"731f5c40d4af6217","current-leader-member-id":"731f5c40d4af6217"}
	{"level":"info","ts":"2024-07-25T18:19:16.188328Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-07-25T18:19:16.188619Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-07-25T18:19:16.188677Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-253131","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"]}
	
	
	==> etcd [95f1dc59987d9bd7f9758959b9c1ccebfb2c65f2c3ab1076e434fec0844f2942] <==
	{"level":"info","ts":"2024-07-25T18:20:53.264547Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T18:20:53.267204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 switched to configuration voters=(8295450472155669015)"}
	{"level":"info","ts":"2024-07-25T18:20:53.267288Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ad335f297da439ca","local-member-id":"731f5c40d4af6217","added-peer-id":"731f5c40d4af6217","added-peer-peer-urls":["https://192.168.39.54:2380"]}
	{"level":"info","ts":"2024-07-25T18:20:53.267455Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ad335f297da439ca","local-member-id":"731f5c40d4af6217","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:20:53.267497Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:20:53.278649Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-25T18:20:53.27894Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"731f5c40d4af6217","initial-advertise-peer-urls":["https://192.168.39.54:2380"],"listen-peer-urls":["https://192.168.39.54:2380"],"advertise-client-urls":["https://192.168.39.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:20:53.278985Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:20:53.290371Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-07-25T18:20:53.290402Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.54:2380"}
	{"level":"info","ts":"2024-07-25T18:20:54.402958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-25T18:20:54.403015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-25T18:20:54.403066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 received MsgPreVoteResp from 731f5c40d4af6217 at term 2"}
	{"level":"info","ts":"2024-07-25T18:20:54.40308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became candidate at term 3"}
	{"level":"info","ts":"2024-07-25T18:20:54.403086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 received MsgVoteResp from 731f5c40d4af6217 at term 3"}
	{"level":"info","ts":"2024-07-25T18:20:54.403094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"731f5c40d4af6217 became leader at term 3"}
	{"level":"info","ts":"2024-07-25T18:20:54.403117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 731f5c40d4af6217 elected leader 731f5c40d4af6217 at term 3"}
	{"level":"info","ts":"2024-07-25T18:20:54.408274Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"731f5c40d4af6217","local-member-attributes":"{Name:multinode-253131 ClientURLs:[https://192.168.39.54:2379]}","request-path":"/0/members/731f5c40d4af6217/attributes","cluster-id":"ad335f297da439ca","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:20:54.408424Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:20:54.411708Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.54:2379"}
	{"level":"info","ts":"2024-07-25T18:20:54.41272Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:20:54.41458Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:20:54.416214Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:20:54.416242Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:21:39.067852Z","caller":"traceutil/trace.go:171","msg":"trace[450221974] transaction","detail":"{read_only:false; response_revision:1062; number_of_response:1; }","duration":"163.238648ms","start":"2024-07-25T18:21:38.904568Z","end":"2024-07-25T18:21:39.067807Z","steps":["trace[450221974] 'process raft request'  (duration: 162.699262ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:24:59 up 11 min,  0 users,  load average: 0.10, 0.24, 0.14
	Linux multinode-253131 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [061828a7da84f56a395fbd33965b2f380017500e948065f9b7ae570ad2cc6e59] <==
	I0725 18:23:57.740362       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:24:07.745991       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:24:07.746120       1 main.go:299] handling current node
	I0725 18:24:07.746149       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:24:07.746168       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:24:17.747479       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:24:17.747605       1 main.go:299] handling current node
	I0725 18:24:17.747644       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:24:17.747663       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:24:27.739977       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:24:27.740210       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:24:27.740427       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:24:27.740465       1 main.go:299] handling current node
	I0725 18:24:37.745289       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:24:37.745408       1 main.go:299] handling current node
	I0725 18:24:37.745443       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:24:37.745465       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:24:47.748475       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:24:47.748526       1 main.go:299] handling current node
	I0725 18:24:47.748549       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:24:47.748555       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:24:57.740219       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:24:57.740278       1 main.go:299] handling current node
	I0725 18:24:57.740312       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:24:57.740325       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [fd663c148a6196bf15de5d7b0cb5ba908e6e1db8cece0c6afcee1110ecb94e5e] <==
	I0725 18:18:30.948677       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	I0725 18:18:40.956249       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:18:40.956360       1 main.go:299] handling current node
	I0725 18:18:40.956395       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:18:40.956417       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:18:40.956663       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:18:40.956700       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	I0725 18:18:50.957205       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:18:50.957263       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	I0725 18:18:50.957414       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:18:50.957433       1 main.go:299] handling current node
	I0725 18:18:50.957452       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:18:50.957456       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:19:00.957186       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:19:00.957351       1 main.go:299] handling current node
	I0725 18:19:00.957391       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:19:00.957415       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:19:00.957597       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:19:00.957674       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	I0725 18:19:10.956421       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0725 18:19:10.956545       1 main.go:299] handling current node
	I0725 18:19:10.956586       1 main.go:295] Handling node with IPs: map[192.168.39.179:{}]
	I0725 18:19:10.956609       1 main.go:322] Node multinode-253131-m02 has CIDR [10.244.1.0/24] 
	I0725 18:19:10.956834       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0725 18:19:10.956876       1 main.go:322] Node multinode-253131-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [268ab6f9cddbcf651bef9126162a4114e8a3fc2ba7fb242dd8466a171e38a3a3] <==
	I0725 18:20:55.694098       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0725 18:20:55.694198       1 policy_source.go:224] refreshing policies
	I0725 18:20:55.701755       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 18:20:55.720087       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0725 18:20:55.720128       1 aggregator.go:165] initial CRD sync complete...
	I0725 18:20:55.720152       1 autoregister_controller.go:141] Starting autoregister controller
	I0725 18:20:55.720158       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0725 18:20:55.720163       1 cache.go:39] Caches are synced for autoregister controller
	I0725 18:20:55.779568       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0725 18:20:55.786099       1 shared_informer.go:320] Caches are synced for configmaps
	I0725 18:20:55.786979       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 18:20:55.787585       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0725 18:20:55.788048       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 18:20:55.788451       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0725 18:20:55.788491       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0725 18:20:55.792991       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0725 18:20:55.793720       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0725 18:20:56.598464       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 18:20:57.632361       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0725 18:20:57.796247       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0725 18:20:57.816507       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0725 18:20:57.875324       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 18:20:57.882004       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 18:21:08.188531       1 controller.go:615] quota admission added evaluator for: endpoints
	I0725 18:21:08.211681       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [a7e2ce3e3194e7be75b77e4e9270ec2bf58de926a874dfef9bba45143a038601] <==
	W0725 18:19:16.124249       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124275       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124299       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124329       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124386       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124419       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124445       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124475       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124506       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124532       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124559       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124590       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124622       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124646       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124670       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124698       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124724       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124748       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124773       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.124826       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.128798       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.128986       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:19:16.129255       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0725 18:19:16.130759       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0725 18:19:16.132521       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [43b7d2bfc585bd62a1000712c61c55365116c5d2c7ff8e1119e92d8d00f03d90] <==
	I0725 18:21:34.675078       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-253131-m02" podCIDRs=["10.244.1.0/24"]
	I0725 18:21:36.548931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="124.606µs"
	I0725 18:21:36.594389       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.503µs"
	I0725 18:21:36.605002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.415µs"
	I0725 18:21:36.607490       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.502µs"
	I0725 18:21:36.612412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.327µs"
	I0725 18:21:36.613792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.346µs"
	I0725 18:21:39.071707       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.427µs"
	I0725 18:21:54.384302       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:21:54.403008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.072µs"
	I0725 18:21:54.415804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.1µs"
	I0725 18:21:58.032222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.009519ms"
	I0725 18:21:58.033460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="873.427µs"
	I0725 18:22:12.803792       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:22:13.835221       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:22:13.836225       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-253131-m03\" does not exist"
	I0725 18:22:13.843063       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-253131-m03" podCIDRs=["10.244.2.0/24"]
	I0725 18:22:33.168310       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:22:38.385851       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:23:18.299215       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.736198ms"
	I0725 18:23:18.303998       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.07µs"
	I0725 18:23:28.177830       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-4hhvf"
	I0725 18:23:28.202795       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-4hhvf"
	I0725 18:23:28.202839       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-st44z"
	I0725 18:23:28.227725       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-st44z"
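
The last four controller-manager lines above show the pod garbage collector force-deleting kindnet-4hhvf and kube-proxy-st44z, which were orphaned when the multinode-253131-m03 node was removed. A quick, illustrative way to list any pods still bound to that node (assumes the multinode-253131 kubectl context; not taken from the test run itself):

  kubectl --context multinode-253131 get pods -A -o wide \
    --field-selector spec.nodeName=multinode-253131-m03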
	
	
	==> kube-controller-manager [79df99dc269c4dfd2f054c28d69e846e4f6bffedfbff350e940b832e0d3cdbb7] <==
	I0725 18:15:08.781329       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-253131-m02\" does not exist"
	I0725 18:15:08.822203       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-253131-m02" podCIDRs=["10.244.1.0/24"]
	I0725 18:15:09.654797       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-253131-m02"
	I0725 18:15:28.691930       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:15:30.839196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.863807ms"
	I0725 18:15:30.864828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.444272ms"
	I0725 18:15:30.878766       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.69927ms"
	I0725 18:15:30.878876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.7µs"
	I0725 18:15:35.034876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.518022ms"
	I0725 18:15:35.035278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.371µs"
	I0725 18:15:35.221477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.110118ms"
	I0725 18:15:35.222249       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.735µs"
	I0725 18:16:02.006186       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-253131-m03\" does not exist"
	I0725 18:16:02.006387       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:16:02.074144       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-253131-m03" podCIDRs=["10.244.2.0/24"]
	I0725 18:16:04.678154       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-253131-m03"
	I0725 18:16:22.489416       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:16:50.296658       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:16:51.329821       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:16:51.330465       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-253131-m03\" does not exist"
	I0725 18:16:51.342917       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-253131-m03" podCIDRs=["10.244.3.0/24"]
	I0725 18:17:10.723060       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m02"
	I0725 18:17:54.730318       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-253131-m03"
	I0725 18:17:54.792496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.287274ms"
	I0725 18:17:54.792576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.962µs"
	
	
	==> kube-proxy [393e599ba938637cdcd3961658df9b28c7bd8583b2e33e6a732e24d863ea0ee3] <==
	I0725 18:14:26.682140       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:14:26.697155       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	I0725 18:14:26.776390       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:14:26.776450       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:14:26.776475       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:14:26.782758       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:14:26.783208       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:14:26.783532       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:14:26.786764       1 config.go:192] "Starting service config controller"
	I0725 18:14:26.787015       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:14:26.787573       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:14:26.787625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:14:26.791200       1 config.go:319] "Starting node config controller"
	I0725 18:14:26.791242       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:14:26.887986       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:14:26.887997       1 shared_informer.go:320] Caches are synced for service config
	I0725 18:14:26.891475       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9c30dbb647c586f95a4c05eee6aa5b984946e5a5ca47340f7e75e10d48037ba2] <==
	I0725 18:20:56.889837       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:20:56.902455       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.54"]
	I0725 18:20:56.962942       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:20:56.962979       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:20:56.962995       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:20:56.974638       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:20:56.976018       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:20:56.976873       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:20:56.978871       1 config.go:192] "Starting service config controller"
	I0725 18:20:56.979664       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:20:56.979744       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:20:56.979763       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:20:56.980238       1 config.go:319] "Starting node config controller"
	I0725 18:20:56.980275       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:20:57.080823       1 shared_informer.go:320] Caches are synced for node config
	I0725 18:20:57.080969       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:20:57.080978       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [28021ff9ef2d557d35cec4bfc59ea6ae47421558c6d1523ba2f8e677e2fe7a19] <==
	E0725 18:14:09.427813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:09.427869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 18:14:09.427932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:09.427941       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 18:14:09.427949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:10.279873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 18:14:10.279950       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:10.324670       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 18:14:10.324745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 18:14:10.342879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 18:14:10.342984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 18:14:10.384120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 18:14:10.384214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 18:14:10.447105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 18:14:10.447147       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:10.465919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 18:14:10.467353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 18:14:10.488393       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 18:14:10.488436       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0725 18:14:10.594472       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 18:14:10.594547       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:14:10.723094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 18:14:10.723211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0725 18:14:13.318554       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0725 18:19:16.112063       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [33f021a008e5e0fd1e671a5223b290e37dc6b833fcc88ecae62a13f7d7c52803] <==
	W0725 18:20:55.687337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 18:20:55.687363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 18:20:55.687406       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 18:20:55.687429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 18:20:55.687564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 18:20:55.687589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0725 18:20:55.693157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 18:20:55.693192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0725 18:20:55.693280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 18:20:55.693340       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 18:20:55.693441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 18:20:55.693465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 18:20:55.693495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 18:20:55.693571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 18:20:55.693653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 18:20:55.693677       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 18:20:55.693720       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 18:20:55.693742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 18:20:55.693584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 18:20:55.694672       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 18:20:55.693532       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 18:20:55.694732       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0725 18:20:55.704161       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 18:20:55.704194       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0725 18:20:56.769758       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.239286    3153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bd539cc-9683-496d-9aea-545539fcf647-xtables-lock\") pod \"kube-proxy-zgrbq\" (UID: \"5bd539cc-9683-496d-9aea-545539fcf647\") " pod="kube-system/kube-proxy-zgrbq"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.239363    3153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/388889a4-653d-4351-b19e-454285b56dd5-tmp\") pod \"storage-provisioner\" (UID: \"388889a4-653d-4351-b19e-454285b56dd5\") " pod="kube-system/storage-provisioner"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.239412    3153 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bd539cc-9683-496d-9aea-545539fcf647-lib-modules\") pod \"kube-proxy-zgrbq\" (UID: \"5bd539cc-9683-496d-9aea-545539fcf647\") " pod="kube-system/kube-proxy-zgrbq"
	Jul 25 18:20:56 multinode-253131 kubelet[3153]: I0725 18:20:56.407285    3153 scope.go:117] "RemoveContainer" containerID="92575c8e1c68f655f813cdd1e28dcea65f45b93bcfdbbf7322cb9f5b263126bf"
	Jul 25 18:21:02 multinode-253131 kubelet[3153]: I0725 18:21:02.537337    3153 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 25 18:21:52 multinode-253131 kubelet[3153]: E0725 18:21:52.203130    3153 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 18:21:52 multinode-253131 kubelet[3153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 18:21:52 multinode-253131 kubelet[3153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 18:21:52 multinode-253131 kubelet[3153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 18:21:52 multinode-253131 kubelet[3153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 18:22:52 multinode-253131 kubelet[3153]: E0725 18:22:52.191266    3153 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 18:22:52 multinode-253131 kubelet[3153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 18:22:52 multinode-253131 kubelet[3153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 18:22:52 multinode-253131 kubelet[3153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 18:22:52 multinode-253131 kubelet[3153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 18:23:52 multinode-253131 kubelet[3153]: E0725 18:23:52.191728    3153 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 18:23:52 multinode-253131 kubelet[3153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 18:23:52 multinode-253131 kubelet[3153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 18:23:52 multinode-253131 kubelet[3153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 18:23:52 multinode-253131 kubelet[3153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 18:24:52 multinode-253131 kubelet[3153]: E0725 18:24:52.190671    3153 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 18:24:52 multinode-253131 kubelet[3153]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 18:24:52 multinode-253131 kubelet[3153]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 18:24:52 multinode-253131 kubelet[3153]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 18:24:52 multinode-253131 kubelet[3153]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0725 18:24:59.113100   44178 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19326-5877/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-253131 -n multinode-253131
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-253131 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.12s)

x
+
TestPreload (181.12s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-062807 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0725 18:29:12.057071   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-062807 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m43.217801206s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-062807 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-062807 image pull gcr.io/k8s-minikube/busybox: (2.600774687s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-062807
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-062807: (6.600446319s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-062807 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0725 18:31:41.640020   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-062807 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m5.799384974s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-062807 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
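For context, the failing check can be replayed by hand with the same commands the test ran above (flags and the profile name are copied verbatim from this run; the final grep is only an illustrative restatement of the assertion in preload_test.go, not part of the harness):

	out/minikube-linux-amd64 start -p test-preload-062807 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-062807 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-062807
	out/minikube-linux-amd64 start -p test-preload-062807 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio
	# Assertion that failed: busybox should still be listed after the stop/start cycle.
	out/minikube-linux-amd64 -p test-preload-062807 image list | grep gcr.io/k8s-minikube/busybox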
panic.go:626: *** TestPreload FAILED at 2024-07-25 18:31:46.217490769 +0000 UTC m=+3780.298222639
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-062807 -n test-preload-062807
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-062807 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n multinode-253131 sudo cat                                       | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /home/docker/cp-test_multinode-253131-m03_multinode-253131.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-253131 cp multinode-253131-m03:/home/docker/cp-test.txt                       | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m02:/home/docker/cp-test_multinode-253131-m03_multinode-253131-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n                                                                 | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | multinode-253131-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-253131 ssh -n multinode-253131-m02 sudo cat                                   | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	|         | /home/docker/cp-test_multinode-253131-m03_multinode-253131-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-253131 node stop m03                                                          | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:16 UTC |
	| node    | multinode-253131 node start                                                             | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:16 UTC | 25 Jul 24 18:17 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-253131                                                                | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:17 UTC |                     |
	| stop    | -p multinode-253131                                                                     | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:17 UTC |                     |
	| start   | -p multinode-253131                                                                     | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:19 UTC | 25 Jul 24 18:22 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-253131                                                                | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:22 UTC |                     |
	| node    | multinode-253131 node delete                                                            | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:22 UTC | 25 Jul 24 18:22 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-253131 stop                                                                   | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:22 UTC |                     |
	| start   | -p multinode-253131                                                                     | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:25 UTC | 25 Jul 24 18:28 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-253131                                                                | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:28 UTC |                     |
	| start   | -p multinode-253131-m02                                                                 | multinode-253131-m02 | jenkins | v1.33.1 | 25 Jul 24 18:28 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-253131-m03                                                                 | multinode-253131-m03 | jenkins | v1.33.1 | 25 Jul 24 18:28 UTC | 25 Jul 24 18:28 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-253131                                                                 | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:28 UTC |                     |
	| delete  | -p multinode-253131-m03                                                                 | multinode-253131-m03 | jenkins | v1.33.1 | 25 Jul 24 18:28 UTC | 25 Jul 24 18:28 UTC |
	| delete  | -p multinode-253131                                                                     | multinode-253131     | jenkins | v1.33.1 | 25 Jul 24 18:28 UTC | 25 Jul 24 18:28 UTC |
	| start   | -p test-preload-062807                                                                  | test-preload-062807  | jenkins | v1.33.1 | 25 Jul 24 18:28 UTC | 25 Jul 24 18:30 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-062807 image pull                                                          | test-preload-062807  | jenkins | v1.33.1 | 25 Jul 24 18:30 UTC | 25 Jul 24 18:30 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-062807                                                                  | test-preload-062807  | jenkins | v1.33.1 | 25 Jul 24 18:30 UTC | 25 Jul 24 18:30 UTC |
	| start   | -p test-preload-062807                                                                  | test-preload-062807  | jenkins | v1.33.1 | 25 Jul 24 18:30 UTC | 25 Jul 24 18:31 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-062807 image list                                                          | test-preload-062807  | jenkins | v1.33.1 | 25 Jul 24 18:31 UTC | 25 Jul 24 18:31 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:30:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:30:40.253092   46540 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:30:40.253340   46540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:30:40.253349   46540 out.go:304] Setting ErrFile to fd 2...
	I0725 18:30:40.253352   46540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:30:40.253517   46540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:30:40.254003   46540 out.go:298] Setting JSON to false
	I0725 18:30:40.254849   46540 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4384,"bootTime":1721927856,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:30:40.254900   46540 start.go:139] virtualization: kvm guest
	I0725 18:30:40.257159   46540 out.go:177] * [test-preload-062807] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:30:40.258683   46540 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:30:40.258716   46540 notify.go:220] Checking for updates...
	I0725 18:30:40.261351   46540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:30:40.262575   46540 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:30:40.263887   46540 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:30:40.265142   46540 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:30:40.266265   46540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:30:40.267753   46540 config.go:182] Loaded profile config "test-preload-062807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0725 18:30:40.268176   46540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:30:40.268249   46540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:30:40.282653   46540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41721
	I0725 18:30:40.283058   46540 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:30:40.283564   46540 main.go:141] libmachine: Using API Version  1
	I0725 18:30:40.283586   46540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:30:40.283897   46540 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:30:40.284076   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	I0725 18:30:40.285678   46540 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0725 18:30:40.286775   46540 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:30:40.287054   46540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:30:40.287084   46540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:30:40.301244   46540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35729
	I0725 18:30:40.301615   46540 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:30:40.302086   46540 main.go:141] libmachine: Using API Version  1
	I0725 18:30:40.302106   46540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:30:40.302392   46540 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:30:40.302570   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	I0725 18:30:40.336102   46540 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:30:40.337303   46540 start.go:297] selected driver: kvm2
	I0725 18:30:40.337317   46540 start.go:901] validating driver "kvm2" against &{Name:test-preload-062807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-062807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:30:40.337434   46540 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:30:40.338316   46540 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:30:40.338394   46540 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:30:40.352753   46540 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:30:40.353190   46540 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:30:40.353259   46540 cni.go:84] Creating CNI manager for ""
	I0725 18:30:40.353276   46540 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:30:40.353361   46540 start.go:340] cluster config:
	{Name:test-preload-062807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-062807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:30:40.353504   46540 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:30:40.355222   46540 out.go:177] * Starting "test-preload-062807" primary control-plane node in "test-preload-062807" cluster
	I0725 18:30:40.356295   46540 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0725 18:30:40.824745   46540 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0725 18:30:40.824783   46540 cache.go:56] Caching tarball of preloaded images
	I0725 18:30:40.824934   46540 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0725 18:30:40.826726   46540 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0725 18:30:40.827874   46540 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0725 18:30:40.924880   46540 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0725 18:30:52.089635   46540 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0725 18:30:52.089735   46540 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0725 18:30:52.930924   46540 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0725 18:30:52.931062   46540 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/config.json ...
	I0725 18:30:52.931317   46540 start.go:360] acquireMachinesLock for test-preload-062807: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:30:52.931387   46540 start.go:364] duration metric: took 47.678µs to acquireMachinesLock for "test-preload-062807"
	I0725 18:30:52.931400   46540 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:30:52.931409   46540 fix.go:54] fixHost starting: 
	I0725 18:30:52.931748   46540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:30:52.931786   46540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:30:52.946031   46540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36561
	I0725 18:30:52.946467   46540 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:30:52.946931   46540 main.go:141] libmachine: Using API Version  1
	I0725 18:30:52.946963   46540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:30:52.947262   46540 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:30:52.947450   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	I0725 18:30:52.947574   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetState
	I0725 18:30:52.949180   46540 fix.go:112] recreateIfNeeded on test-preload-062807: state=Stopped err=<nil>
	I0725 18:30:52.949222   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	W0725 18:30:52.949393   46540 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:30:52.952047   46540 out.go:177] * Restarting existing kvm2 VM for "test-preload-062807" ...
	I0725 18:30:52.953364   46540 main.go:141] libmachine: (test-preload-062807) Calling .Start
	I0725 18:30:52.953492   46540 main.go:141] libmachine: (test-preload-062807) Ensuring networks are active...
	I0725 18:30:52.954234   46540 main.go:141] libmachine: (test-preload-062807) Ensuring network default is active
	I0725 18:30:52.954537   46540 main.go:141] libmachine: (test-preload-062807) Ensuring network mk-test-preload-062807 is active
	I0725 18:30:52.954843   46540 main.go:141] libmachine: (test-preload-062807) Getting domain xml...
	I0725 18:30:52.955532   46540 main.go:141] libmachine: (test-preload-062807) Creating domain...
	I0725 18:30:54.130111   46540 main.go:141] libmachine: (test-preload-062807) Waiting to get IP...
	I0725 18:30:54.130905   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:30:54.131195   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:30:54.131273   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:30:54.131198   46608 retry.go:31] will retry after 245.629517ms: waiting for machine to come up
	I0725 18:30:54.378604   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:30:54.379053   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:30:54.379081   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:30:54.379013   46608 retry.go:31] will retry after 385.856511ms: waiting for machine to come up
	I0725 18:30:54.766633   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:30:54.766999   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:30:54.767024   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:30:54.766948   46608 retry.go:31] will retry after 487.751208ms: waiting for machine to come up
	I0725 18:30:55.256559   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:30:55.256943   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:30:55.256971   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:30:55.256898   46608 retry.go:31] will retry after 533.241091ms: waiting for machine to come up
	I0725 18:30:55.791630   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:30:55.792059   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:30:55.792083   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:30:55.792020   46608 retry.go:31] will retry after 539.496178ms: waiting for machine to come up
	I0725 18:30:56.332699   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:30:56.333127   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:30:56.333159   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:30:56.333075   46608 retry.go:31] will retry after 866.268126ms: waiting for machine to come up
	I0725 18:30:57.201075   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:30:57.201420   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:30:57.201449   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:30:57.201376   46608 retry.go:31] will retry after 1.167707323s: waiting for machine to come up
	I0725 18:30:58.371052   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:30:58.371413   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:30:58.371441   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:30:58.371360   46608 retry.go:31] will retry after 1.370414798s: waiting for machine to come up
	I0725 18:30:59.743884   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:30:59.744260   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:30:59.744287   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:30:59.744217   46608 retry.go:31] will retry after 1.444120788s: waiting for machine to come up
	I0725 18:31:01.190869   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:01.191321   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:31:01.191351   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:31:01.191267   46608 retry.go:31] will retry after 2.312839188s: waiting for machine to come up
	I0725 18:31:03.505217   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:03.505684   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:31:03.505714   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:31:03.505632   46608 retry.go:31] will retry after 2.556368997s: waiting for machine to come up
	I0725 18:31:06.065350   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:06.065786   46540 main.go:141] libmachine: (test-preload-062807) DBG | unable to find current IP address of domain test-preload-062807 in network mk-test-preload-062807
	I0725 18:31:06.065809   46540 main.go:141] libmachine: (test-preload-062807) DBG | I0725 18:31:06.065741   46608 retry.go:31] will retry after 2.493385771s: waiting for machine to come up
	I0725 18:31:08.560603   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.561077   46540 main.go:141] libmachine: (test-preload-062807) Found IP for machine: 192.168.39.203
	I0725 18:31:08.561110   46540 main.go:141] libmachine: (test-preload-062807) Reserving static IP address...
	I0725 18:31:08.561128   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has current primary IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.561465   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "test-preload-062807", mac: "52:54:00:dd:29:a7", ip: "192.168.39.203"} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:08.561493   46540 main.go:141] libmachine: (test-preload-062807) DBG | skip adding static IP to network mk-test-preload-062807 - found existing host DHCP lease matching {name: "test-preload-062807", mac: "52:54:00:dd:29:a7", ip: "192.168.39.203"}
	I0725 18:31:08.561511   46540 main.go:141] libmachine: (test-preload-062807) Reserved static IP address: 192.168.39.203
	I0725 18:31:08.561526   46540 main.go:141] libmachine: (test-preload-062807) Waiting for SSH to be available...
	I0725 18:31:08.561541   46540 main.go:141] libmachine: (test-preload-062807) DBG | Getting to WaitForSSH function...
	I0725 18:31:08.563390   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.563700   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:08.563733   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.563848   46540 main.go:141] libmachine: (test-preload-062807) DBG | Using SSH client type: external
	I0725 18:31:08.563878   46540 main.go:141] libmachine: (test-preload-062807) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/test-preload-062807/id_rsa (-rw-------)
	I0725 18:31:08.563920   46540 main.go:141] libmachine: (test-preload-062807) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/test-preload-062807/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:31:08.563933   46540 main.go:141] libmachine: (test-preload-062807) DBG | About to run SSH command:
	I0725 18:31:08.563942   46540 main.go:141] libmachine: (test-preload-062807) DBG | exit 0
	I0725 18:31:08.687993   46540 main.go:141] libmachine: (test-preload-062807) DBG | SSH cmd err, output: <nil>: 
	I0725 18:31:08.688355   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetConfigRaw
	I0725 18:31:08.688977   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetIP
	I0725 18:31:08.691698   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.692113   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:08.692149   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.692374   46540 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/config.json ...
	I0725 18:31:08.692579   46540 machine.go:94] provisionDockerMachine start ...
	I0725 18:31:08.692606   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	I0725 18:31:08.692829   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:08.694903   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.695215   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:08.695243   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.695378   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHPort
	I0725 18:31:08.695560   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:08.695713   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:08.695863   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHUsername
	I0725 18:31:08.696012   46540 main.go:141] libmachine: Using SSH client type: native
	I0725 18:31:08.696240   46540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0725 18:31:08.696254   46540 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:31:08.796450   46540 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:31:08.796479   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetMachineName
	I0725 18:31:08.796755   46540 buildroot.go:166] provisioning hostname "test-preload-062807"
	I0725 18:31:08.796779   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetMachineName
	I0725 18:31:08.796949   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:08.799517   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.799896   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:08.799930   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.800057   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHPort
	I0725 18:31:08.800234   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:08.800397   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:08.800538   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHUsername
	I0725 18:31:08.800723   46540 main.go:141] libmachine: Using SSH client type: native
	I0725 18:31:08.800886   46540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0725 18:31:08.800899   46540 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-062807 && echo "test-preload-062807" | sudo tee /etc/hostname
	I0725 18:31:08.917739   46540 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-062807
	
	I0725 18:31:08.917769   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:08.920579   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.920889   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:08.920920   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:08.921149   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHPort
	I0725 18:31:08.921340   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:08.921578   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:08.921721   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHUsername
	I0725 18:31:08.921893   46540 main.go:141] libmachine: Using SSH client type: native
	I0725 18:31:08.922066   46540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0725 18:31:08.922087   46540 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-062807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-062807/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-062807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:31:09.031968   46540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:31:09.032000   46540 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:31:09.032017   46540 buildroot.go:174] setting up certificates
	I0725 18:31:09.032026   46540 provision.go:84] configureAuth start
	I0725 18:31:09.032034   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetMachineName
	I0725 18:31:09.032347   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetIP
	I0725 18:31:09.035079   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.035453   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:09.035487   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.035654   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:09.038142   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.038447   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:09.038475   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.038633   46540 provision.go:143] copyHostCerts
	I0725 18:31:09.038692   46540 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:31:09.038703   46540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:31:09.038767   46540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:31:09.038852   46540 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:31:09.038860   46540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:31:09.038883   46540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:31:09.038941   46540 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:31:09.038949   46540 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:31:09.038968   46540 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:31:09.039026   46540 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.test-preload-062807 san=[127.0.0.1 192.168.39.203 localhost minikube test-preload-062807]
	I0725 18:31:09.190150   46540 provision.go:177] copyRemoteCerts
	I0725 18:31:09.190209   46540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:31:09.190233   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:09.192997   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.193357   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:09.193401   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.193561   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHPort
	I0725 18:31:09.193722   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:09.193862   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHUsername
	I0725 18:31:09.193962   46540 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/test-preload-062807/id_rsa Username:docker}
	I0725 18:31:09.273958   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:31:09.295950   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0725 18:31:09.317206   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:31:09.339190   46540 provision.go:87] duration metric: took 307.150954ms to configureAuth
	I0725 18:31:09.339220   46540 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:31:09.339409   46540 config.go:182] Loaded profile config "test-preload-062807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0725 18:31:09.339517   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:09.342019   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.342376   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:09.342402   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.342635   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHPort
	I0725 18:31:09.342807   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:09.342969   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:09.343100   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHUsername
	I0725 18:31:09.343252   46540 main.go:141] libmachine: Using SSH client type: native
	I0725 18:31:09.343404   46540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0725 18:31:09.343419   46540 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:31:09.626220   46540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:31:09.626249   46540 machine.go:97] duration metric: took 933.65007ms to provisionDockerMachine
	I0725 18:31:09.626264   46540 start.go:293] postStartSetup for "test-preload-062807" (driver="kvm2")
	I0725 18:31:09.626275   46540 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:31:09.626295   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	I0725 18:31:09.626597   46540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:31:09.626624   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:09.629125   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.629399   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:09.629427   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.629613   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHPort
	I0725 18:31:09.629845   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:09.630007   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHUsername
	I0725 18:31:09.630141   46540 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/test-preload-062807/id_rsa Username:docker}
	I0725 18:31:09.710392   46540 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:31:09.714472   46540 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:31:09.714493   46540 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:31:09.714568   46540 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:31:09.714635   46540 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:31:09.714717   46540 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:31:09.723590   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:31:09.746589   46540 start.go:296] duration metric: took 120.312759ms for postStartSetup
	I0725 18:31:09.746635   46540 fix.go:56] duration metric: took 16.815222261s for fixHost
	I0725 18:31:09.746659   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:09.749244   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.749610   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:09.749639   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.749734   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHPort
	I0725 18:31:09.749924   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:09.750096   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:09.750243   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHUsername
	I0725 18:31:09.750426   46540 main.go:141] libmachine: Using SSH client type: native
	I0725 18:31:09.750603   46540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I0725 18:31:09.750616   46540 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:31:09.852468   46540 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721932269.825685270
	
	I0725 18:31:09.852497   46540 fix.go:216] guest clock: 1721932269.825685270
	I0725 18:31:09.852508   46540 fix.go:229] Guest: 2024-07-25 18:31:09.82568527 +0000 UTC Remote: 2024-07-25 18:31:09.746639734 +0000 UTC m=+29.526341502 (delta=79.045536ms)
	I0725 18:31:09.852535   46540 fix.go:200] guest clock delta is within tolerance: 79.045536ms
	I0725 18:31:09.852547   46540 start.go:83] releasing machines lock for "test-preload-062807", held for 16.921152128s
	I0725 18:31:09.852575   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	I0725 18:31:09.852837   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetIP
	I0725 18:31:09.855360   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.855634   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:09.855662   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.855811   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	I0725 18:31:09.856334   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	I0725 18:31:09.856507   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	I0725 18:31:09.856612   46540 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:31:09.856662   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:09.856734   46540 ssh_runner.go:195] Run: cat /version.json
	I0725 18:31:09.856761   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:09.859259   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.859588   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:09.859616   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.859644   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.859737   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHPort
	I0725 18:31:09.859903   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:09.859953   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:09.859977   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:09.860056   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHUsername
	I0725 18:31:09.860128   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHPort
	I0725 18:31:09.860215   46540 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/test-preload-062807/id_rsa Username:docker}
	I0725 18:31:09.860365   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:09.860514   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHUsername
	I0725 18:31:09.860655   46540 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/test-preload-062807/id_rsa Username:docker}
	I0725 18:31:09.979626   46540 ssh_runner.go:195] Run: systemctl --version
	I0725 18:31:09.985275   46540 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:31:10.128122   46540 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:31:10.133627   46540 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:31:10.133694   46540 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:31:10.149521   46540 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:31:10.149551   46540 start.go:495] detecting cgroup driver to use...
	I0725 18:31:10.149616   46540 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:31:10.165205   46540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:31:10.178902   46540 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:31:10.178960   46540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:31:10.192510   46540 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:31:10.204314   46540 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:31:10.313136   46540 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:31:10.453375   46540 docker.go:233] disabling docker service ...
	I0725 18:31:10.453441   46540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:31:10.467368   46540 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:31:10.482606   46540 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:31:10.627368   46540 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:31:10.743178   46540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:31:10.757118   46540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:31:10.774097   46540 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0725 18:31:10.774172   46540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:31:10.784013   46540 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:31:10.784076   46540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:31:10.793695   46540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:31:10.803095   46540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:31:10.812430   46540 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:31:10.822274   46540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:31:10.831429   46540 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:31:10.846440   46540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:31:10.855811   46540 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:31:10.864289   46540 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:31:10.864349   46540 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:31:10.876671   46540 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:31:10.885990   46540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:31:10.998517   46540 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:31:11.128760   46540 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:31:11.128820   46540 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:31:11.133645   46540 start.go:563] Will wait 60s for crictl version
	I0725 18:31:11.133692   46540 ssh_runner.go:195] Run: which crictl
	I0725 18:31:11.136978   46540 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:31:11.170558   46540 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:31:11.170629   46540 ssh_runner.go:195] Run: crio --version
	I0725 18:31:11.196378   46540 ssh_runner.go:195] Run: crio --version
	I0725 18:31:11.224895   46540 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0725 18:31:11.226213   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetIP
	I0725 18:31:11.228832   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:11.229205   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:11.229238   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:11.229427   46540 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:31:11.233132   46540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:31:11.245242   46540 kubeadm.go:883] updating cluster {Name:test-preload-062807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-062807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:31:11.245363   46540 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0725 18:31:11.245408   46540 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:31:11.280277   46540 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0725 18:31:11.280355   46540 ssh_runner.go:195] Run: which lz4
	I0725 18:31:11.283868   46540 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:31:11.287508   46540 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:31:11.287535   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0725 18:31:12.584593   46540 crio.go:462] duration metric: took 1.300759257s to copy over tarball
	I0725 18:31:12.584654   46540 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:31:14.875572   46540 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.290889448s)
	I0725 18:31:14.875606   46540 crio.go:469] duration metric: took 2.290990655s to extract the tarball
	I0725 18:31:14.875616   46540 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:31:14.917681   46540 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:31:14.965887   46540 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0725 18:31:14.965910   46540 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:31:14.965960   46540 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:31:14.965997   46540 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0725 18:31:14.966039   46540 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0725 18:31:14.966020   46540 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0725 18:31:14.966041   46540 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0725 18:31:14.966018   46540 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0725 18:31:14.966063   46540 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0725 18:31:14.966067   46540 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 18:31:14.967360   46540 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 18:31:14.967418   46540 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0725 18:31:14.967433   46540 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0725 18:31:14.967360   46540 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0725 18:31:14.967359   46540 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0725 18:31:14.967376   46540 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0725 18:31:14.967375   46540 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0725 18:31:14.967555   46540 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:31:15.201131   46540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0725 18:31:15.202524   46540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0725 18:31:15.202644   46540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0725 18:31:15.208035   46540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0725 18:31:15.212446   46540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0725 18:31:15.227744   46540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0725 18:31:15.239213   46540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0725 18:31:15.294033   46540 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0725 18:31:15.294079   46540 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0725 18:31:15.294128   46540 ssh_runner.go:195] Run: which crictl
	I0725 18:31:15.340732   46540 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0725 18:31:15.340777   46540 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 18:31:15.340813   46540 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0725 18:31:15.340828   46540 ssh_runner.go:195] Run: which crictl
	I0725 18:31:15.340849   46540 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0725 18:31:15.341038   46540 ssh_runner.go:195] Run: which crictl
	I0725 18:31:15.340868   46540 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0725 18:31:15.341087   46540 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0725 18:31:15.341120   46540 ssh_runner.go:195] Run: which crictl
	I0725 18:31:15.362805   46540 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0725 18:31:15.362845   46540 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0725 18:31:15.362898   46540 ssh_runner.go:195] Run: which crictl
	I0725 18:31:15.369014   46540 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0725 18:31:15.369057   46540 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0725 18:31:15.369112   46540 ssh_runner.go:195] Run: which crictl
	I0725 18:31:15.370677   46540 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0725 18:31:15.370720   46540 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0725 18:31:15.370722   46540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0725 18:31:15.370839   46540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0725 18:31:15.370760   46540 ssh_runner.go:195] Run: which crictl
	I0725 18:31:15.370771   46540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0725 18:31:15.370884   46540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0725 18:31:15.370927   46540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0725 18:31:15.373722   46540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0725 18:31:15.479625   46540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0725 18:31:15.479689   46540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0725 18:31:15.479732   46540 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0725 18:31:15.479798   46540 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0725 18:31:15.492968   46540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0725 18:31:15.493021   46540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0725 18:31:15.493055   46540 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0725 18:31:15.493087   46540 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0725 18:31:15.493109   46540 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0725 18:31:15.493150   46540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0725 18:31:15.493175   46540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0725 18:31:15.493218   46540 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0725 18:31:15.493230   46540 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0725 18:31:15.493231   46540 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0725 18:31:15.493249   46540 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0725 18:31:15.493261   46540 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0725 18:31:15.493232   46540 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0725 18:31:15.858371   46540 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:31:18.765833   46540 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.272547598s)
	I0725 18:31:18.765874   46540 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0725 18:31:18.765877   46540 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (3.272749249s)
	I0725 18:31:18.765895   46540 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0725 18:31:18.765921   46540 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0725 18:31:18.765941   46540 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0725 18:31:18.765971   46540 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.27271974s)
	I0725 18:31:18.765990   46540 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0725 18:31:18.766003   46540 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0725 18:31:18.766015   46540 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: (3.272941984s)
	I0725 18:31:18.766044   46540 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0725 18:31:18.766043   46540 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: (3.27293533s)
	I0725 18:31:18.766063   46540 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0725 18:31:18.766066   46540 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.27275199s)
	I0725 18:31:18.766078   46540 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0725 18:31:18.766107   46540 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.907711007s)
	I0725 18:31:18.772104   46540 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0725 18:31:19.615187   46540 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0725 18:31:19.615228   46540 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0725 18:31:19.615268   46540 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0725 18:31:20.460793   46540 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0725 18:31:20.460848   46540 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0725 18:31:20.460910   46540 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0725 18:31:20.598994   46540 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0725 18:31:20.599046   46540 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0725 18:31:20.599114   46540 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0725 18:31:20.938899   46540 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0725 18:31:20.938949   46540 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0725 18:31:20.939017   46540 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0725 18:31:21.376499   46540 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0725 18:31:21.376553   46540 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0725 18:31:21.376623   46540 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0725 18:31:23.521248   46540 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.144601577s)
	I0725 18:31:23.521278   46540 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0725 18:31:23.521305   46540 cache_images.go:123] Successfully loaded all cached images
	I0725 18:31:23.521311   46540 cache_images.go:92] duration metric: took 8.555389346s to LoadCachedImages
	I0725 18:31:23.521321   46540 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.24.4 crio true true} ...
	I0725 18:31:23.521447   46540 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-062807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-062807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:31:23.521520   46540 ssh_runner.go:195] Run: crio config
	I0725 18:31:23.572178   46540 cni.go:84] Creating CNI manager for ""
	I0725 18:31:23.572204   46540 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:31:23.572217   46540 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:31:23.572235   46540 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-062807 NodeName:test-preload-062807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:31:23.572400   46540 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-062807"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:31:23.572475   46540 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0725 18:31:23.581746   46540 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:31:23.581805   46540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:31:23.590981   46540 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0725 18:31:23.605968   46540 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:31:23.620551   46540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0725 18:31:23.635838   46540 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I0725 18:31:23.639257   46540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:31:23.649738   46540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:31:23.749213   46540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:31:23.764850   46540 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807 for IP: 192.168.39.203
	I0725 18:31:23.764875   46540 certs.go:194] generating shared ca certs ...
	I0725 18:31:23.764895   46540 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:31:23.765078   46540 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:31:23.765138   46540 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:31:23.765154   46540 certs.go:256] generating profile certs ...
	I0725 18:31:23.765269   46540 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/client.key
	I0725 18:31:23.765325   46540 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/apiserver.key.ed522945
	I0725 18:31:23.765362   46540 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/proxy-client.key
	I0725 18:31:23.765465   46540 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:31:23.765492   46540 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:31:23.765502   46540 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:31:23.765520   46540 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:31:23.765547   46540 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:31:23.765565   46540 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:31:23.765612   46540 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:31:23.766212   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:31:23.798196   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:31:23.830610   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:31:23.859155   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:31:23.889638   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0725 18:31:23.913107   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:31:23.946176   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:31:23.976732   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:31:23.997854   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:31:24.019836   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:31:24.041715   46540 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:31:24.062450   46540 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:31:24.077656   46540 ssh_runner.go:195] Run: openssl version
	I0725 18:31:24.083035   46540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:31:24.093156   46540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:31:24.097157   46540 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:31:24.097204   46540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:31:24.102430   46540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:31:24.112499   46540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:31:24.122079   46540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:31:24.126022   46540 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:31:24.126074   46540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:31:24.131036   46540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:31:24.140569   46540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:31:24.150126   46540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:31:24.153960   46540 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:31:24.154010   46540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:31:24.158885   46540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:31:24.168353   46540 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:31:24.172297   46540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:31:24.177630   46540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:31:24.182747   46540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:31:24.187930   46540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:31:24.193027   46540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:31:24.198312   46540 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 18:31:24.203437   46540 kubeadm.go:392] StartCluster: {Name:test-preload-062807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-062807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:31:24.203543   46540 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:31:24.203600   46540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:31:24.243701   46540 cri.go:89] found id: ""
	I0725 18:31:24.243776   46540 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:31:24.253279   46540 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:31:24.253339   46540 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:31:24.253389   46540 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:31:24.262317   46540 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:31:24.262716   46540 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-062807" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:31:24.262815   46540 kubeconfig.go:62] /home/jenkins/minikube-integration/19326-5877/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-062807" cluster setting kubeconfig missing "test-preload-062807" context setting]
	I0725 18:31:24.263069   46540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:31:24.263665   46540 kapi.go:59] client config for test-preload-062807: &rest.Config{Host:"https://192.168.39.203:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 18:31:24.264191   46540 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:31:24.272973   46540 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.203
	I0725 18:31:24.273005   46540 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:31:24.273014   46540 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:31:24.273056   46540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:31:24.305665   46540 cri.go:89] found id: ""
	I0725 18:31:24.305722   46540 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:31:24.320833   46540 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:31:24.329850   46540 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:31:24.329869   46540 kubeadm.go:157] found existing configuration files:
	
	I0725 18:31:24.329909   46540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:31:24.338306   46540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:31:24.338364   46540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:31:24.347068   46540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:31:24.355521   46540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:31:24.355567   46540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:31:24.364136   46540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:31:24.372202   46540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:31:24.372245   46540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:31:24.380819   46540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:31:24.388902   46540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:31:24.388950   46540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:31:24.397359   46540 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:31:24.405901   46540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:31:24.490653   46540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:31:25.156753   46540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:31:25.397805   46540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:31:25.486097   46540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:31:25.555474   46540 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:31:25.555562   46540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:31:26.056680   46540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:31:26.556405   46540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:31:26.575177   46540 api_server.go:72] duration metric: took 1.019705253s to wait for apiserver process to appear ...
	I0725 18:31:26.575201   46540 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:31:26.575218   46540 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0725 18:31:26.575633   46540 api_server.go:269] stopped: https://192.168.39.203:8443/healthz: Get "https://192.168.39.203:8443/healthz": dial tcp 192.168.39.203:8443: connect: connection refused
	I0725 18:31:27.076279   46540 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0725 18:31:30.674232   46540 api_server.go:279] https://192.168.39.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:31:30.674284   46540 api_server.go:103] status: https://192.168.39.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:31:30.674304   46540 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0725 18:31:30.697507   46540 api_server.go:279] https://192.168.39.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:31:30.697533   46540 api_server.go:103] status: https://192.168.39.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:31:31.076097   46540 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0725 18:31:31.081444   46540 api_server.go:279] https://192.168.39.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:31:31.081468   46540 api_server.go:103] status: https://192.168.39.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:31:31.576120   46540 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0725 18:31:31.583336   46540 api_server.go:279] https://192.168.39.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:31:31.583360   46540 api_server.go:103] status: https://192.168.39.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:31:32.075972   46540 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0725 18:31:32.081346   46540 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0725 18:31:32.087052   46540 api_server.go:141] control plane version: v1.24.4
	I0725 18:31:32.087076   46540 api_server.go:131] duration metric: took 5.511869901s to wait for apiserver health ...
	I0725 18:31:32.087084   46540 cni.go:84] Creating CNI manager for ""
	I0725 18:31:32.087090   46540 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:31:32.088878   46540 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:31:32.090139   46540 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:31:32.100314   46540 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:31:32.117340   46540 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:31:32.129123   46540 system_pods.go:59] 7 kube-system pods found
	I0725 18:31:32.129156   46540 system_pods.go:61] "coredns-6d4b75cb6d-w9jg4" [aca89fb1-320d-48a7-bafb-2420d8d0ac29] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:31:32.129164   46540 system_pods.go:61] "etcd-test-preload-062807" [294d9123-052b-40f9-9e8c-f54a83142250] Running
	I0725 18:31:32.129174   46540 system_pods.go:61] "kube-apiserver-test-preload-062807" [56ac8d42-ff83-4698-8fe4-7598e479fecf] Running
	I0725 18:31:32.129181   46540 system_pods.go:61] "kube-controller-manager-test-preload-062807" [7c9bf23e-6366-4a93-b70f-bfd2d468ce28] Running
	I0725 18:31:32.129189   46540 system_pods.go:61] "kube-proxy-v75mr" [b061a189-fa58-47b3-88a7-261eeb02f88b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:31:32.129211   46540 system_pods.go:61] "kube-scheduler-test-preload-062807" [2d1a9ea2-3dfb-4d17-9679-39ed2db80ded] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:31:32.129225   46540 system_pods.go:61] "storage-provisioner" [af048abb-6a72-49ef-8942-cb7cc6c9aa68] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:31:32.129236   46540 system_pods.go:74] duration metric: took 11.872091ms to wait for pod list to return data ...
	I0725 18:31:32.129248   46540 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:31:32.132210   46540 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:31:32.132244   46540 node_conditions.go:123] node cpu capacity is 2
	I0725 18:31:32.132259   46540 node_conditions.go:105] duration metric: took 3.002768ms to run NodePressure ...
	I0725 18:31:32.132290   46540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:31:32.362775   46540 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:31:32.367295   46540 kubeadm.go:739] kubelet initialised
	I0725 18:31:32.367321   46540 kubeadm.go:740] duration metric: took 4.521309ms waiting for restarted kubelet to initialise ...
	I0725 18:31:32.367330   46540 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:31:32.377181   46540 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-w9jg4" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.385041   46540 pod_ready.go:97] node "test-preload-062807" hosting pod "coredns-6d4b75cb6d-w9jg4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:32.385067   46540 pod_ready.go:81] duration metric: took 7.8558ms for pod "coredns-6d4b75cb6d-w9jg4" in "kube-system" namespace to be "Ready" ...
	E0725 18:31:32.385078   46540 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-062807" hosting pod "coredns-6d4b75cb6d-w9jg4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:32.385085   46540 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.392403   46540 pod_ready.go:97] node "test-preload-062807" hosting pod "etcd-test-preload-062807" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:32.392427   46540 pod_ready.go:81] duration metric: took 7.330127ms for pod "etcd-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	E0725 18:31:32.392438   46540 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-062807" hosting pod "etcd-test-preload-062807" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:32.392446   46540 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.397128   46540 pod_ready.go:97] node "test-preload-062807" hosting pod "kube-apiserver-test-preload-062807" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:32.397149   46540 pod_ready.go:81] duration metric: took 4.662496ms for pod "kube-apiserver-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	E0725 18:31:32.397159   46540 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-062807" hosting pod "kube-apiserver-test-preload-062807" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:32.397167   46540 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.521237   46540 pod_ready.go:97] node "test-preload-062807" hosting pod "kube-controller-manager-test-preload-062807" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:32.521266   46540 pod_ready.go:81] duration metric: took 124.08771ms for pod "kube-controller-manager-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	E0725 18:31:32.521277   46540 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-062807" hosting pod "kube-controller-manager-test-preload-062807" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:32.521285   46540 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v75mr" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.921396   46540 pod_ready.go:97] node "test-preload-062807" hosting pod "kube-proxy-v75mr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:32.921430   46540 pod_ready.go:81] duration metric: took 400.119232ms for pod "kube-proxy-v75mr" in "kube-system" namespace to be "Ready" ...
	E0725 18:31:32.921441   46540 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-062807" hosting pod "kube-proxy-v75mr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:32.921450   46540 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:33.321259   46540 pod_ready.go:97] node "test-preload-062807" hosting pod "kube-scheduler-test-preload-062807" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:33.321287   46540 pod_ready.go:81] duration metric: took 399.828668ms for pod "kube-scheduler-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	E0725 18:31:33.321300   46540 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-062807" hosting pod "kube-scheduler-test-preload-062807" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:33.321309   46540 pod_ready.go:38] duration metric: took 953.969548ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:31:33.321331   46540 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:31:33.332712   46540 ops.go:34] apiserver oom_adj: -16
	I0725 18:31:33.332733   46540 kubeadm.go:597] duration metric: took 9.079383071s to restartPrimaryControlPlane
	I0725 18:31:33.332743   46540 kubeadm.go:394] duration metric: took 9.129345457s to StartCluster
	I0725 18:31:33.332770   46540 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:31:33.332855   46540 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:31:33.333426   46540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:31:33.333673   46540 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:31:33.333728   46540 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:31:33.333820   46540 addons.go:69] Setting storage-provisioner=true in profile "test-preload-062807"
	I0725 18:31:33.333827   46540 addons.go:69] Setting default-storageclass=true in profile "test-preload-062807"
	I0725 18:31:33.333851   46540 addons.go:234] Setting addon storage-provisioner=true in "test-preload-062807"
	W0725 18:31:33.333862   46540 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:31:33.333883   46540 config.go:182] Loaded profile config "test-preload-062807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0725 18:31:33.333889   46540 host.go:66] Checking if "test-preload-062807" exists ...
	I0725 18:31:33.333852   46540 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-062807"
	I0725 18:31:33.334154   46540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:31:33.334183   46540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:31:33.334224   46540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:31:33.334263   46540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:31:33.335462   46540 out.go:177] * Verifying Kubernetes components...
	I0725 18:31:33.336746   46540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:31:33.349214   46540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I0725 18:31:33.349620   46540 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:31:33.350057   46540 main.go:141] libmachine: Using API Version  1
	I0725 18:31:33.350076   46540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:31:33.350377   46540 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:31:33.350555   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetState
	I0725 18:31:33.352999   46540 kapi.go:59] client config for test-preload-062807: &rest.Config{Host:"https://192.168.39.203:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/test-preload-062807/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 18:31:33.353330   46540 addons.go:234] Setting addon default-storageclass=true in "test-preload-062807"
	W0725 18:31:33.353348   46540 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:31:33.353350   46540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36363
	I0725 18:31:33.353381   46540 host.go:66] Checking if "test-preload-062807" exists ...
	I0725 18:31:33.353742   46540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:31:33.353784   46540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:31:33.353812   46540 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:31:33.354327   46540 main.go:141] libmachine: Using API Version  1
	I0725 18:31:33.354355   46540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:31:33.354628   46540 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:31:33.355047   46540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:31:33.355096   46540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:31:33.368138   46540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0725 18:31:33.368498   46540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0725 18:31:33.368538   46540 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:31:33.368891   46540 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:31:33.369047   46540 main.go:141] libmachine: Using API Version  1
	I0725 18:31:33.369076   46540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:31:33.369285   46540 main.go:141] libmachine: Using API Version  1
	I0725 18:31:33.369297   46540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:31:33.369416   46540 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:31:33.369595   46540 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:31:33.369735   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetState
	I0725 18:31:33.369959   46540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:31:33.369995   46540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:31:33.371082   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	I0725 18:31:33.373196   46540 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:31:33.374528   46540 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:31:33.374546   46540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:31:33.374560   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:33.377752   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:33.378201   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:33.378233   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:33.378500   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHPort
	I0725 18:31:33.378656   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:33.378790   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHUsername
	I0725 18:31:33.378896   46540 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/test-preload-062807/id_rsa Username:docker}
	I0725 18:31:33.385423   46540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I0725 18:31:33.385836   46540 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:31:33.386383   46540 main.go:141] libmachine: Using API Version  1
	I0725 18:31:33.386407   46540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:31:33.386778   46540 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:31:33.386984   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetState
	I0725 18:31:33.388375   46540 main.go:141] libmachine: (test-preload-062807) Calling .DriverName
	I0725 18:31:33.388551   46540 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:31:33.388565   46540 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:31:33.388582   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHHostname
	I0725 18:31:33.391297   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:33.391701   46540 main.go:141] libmachine: (test-preload-062807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:29:a7", ip: ""} in network mk-test-preload-062807: {Iface:virbr1 ExpiryTime:2024-07-25 19:29:01 +0000 UTC Type:0 Mac:52:54:00:dd:29:a7 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:test-preload-062807 Clientid:01:52:54:00:dd:29:a7}
	I0725 18:31:33.391723   46540 main.go:141] libmachine: (test-preload-062807) DBG | domain test-preload-062807 has defined IP address 192.168.39.203 and MAC address 52:54:00:dd:29:a7 in network mk-test-preload-062807
	I0725 18:31:33.391879   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHPort
	I0725 18:31:33.392046   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHKeyPath
	I0725 18:31:33.392175   46540 main.go:141] libmachine: (test-preload-062807) Calling .GetSSHUsername
	I0725 18:31:33.392336   46540 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/test-preload-062807/id_rsa Username:docker}
	I0725 18:31:33.512794   46540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:31:33.531245   46540 node_ready.go:35] waiting up to 6m0s for node "test-preload-062807" to be "Ready" ...
	I0725 18:31:33.584940   46540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:31:33.605747   46540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:31:34.555449   46540 main.go:141] libmachine: Making call to close driver server
	I0725 18:31:34.555476   46540 main.go:141] libmachine: (test-preload-062807) Calling .Close
	I0725 18:31:34.555763   46540 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:31:34.555783   46540 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:31:34.555793   46540 main.go:141] libmachine: Making call to close driver server
	I0725 18:31:34.555802   46540 main.go:141] libmachine: (test-preload-062807) Calling .Close
	I0725 18:31:34.556036   46540 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:31:34.556049   46540 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:31:34.565535   46540 main.go:141] libmachine: Making call to close driver server
	I0725 18:31:34.565565   46540 main.go:141] libmachine: (test-preload-062807) Calling .Close
	I0725 18:31:34.565792   46540 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:31:34.565811   46540 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:31:34.565810   46540 main.go:141] libmachine: (test-preload-062807) DBG | Closing plugin on server side
	I0725 18:31:34.565827   46540 main.go:141] libmachine: Making call to close driver server
	I0725 18:31:34.565836   46540 main.go:141] libmachine: (test-preload-062807) Calling .Close
	I0725 18:31:34.566068   46540 main.go:141] libmachine: (test-preload-062807) DBG | Closing plugin on server side
	I0725 18:31:34.566101   46540 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:31:34.566111   46540 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:31:34.574254   46540 main.go:141] libmachine: Making call to close driver server
	I0725 18:31:34.574271   46540 main.go:141] libmachine: (test-preload-062807) Calling .Close
	I0725 18:31:34.574538   46540 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:31:34.574557   46540 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:31:34.574562   46540 main.go:141] libmachine: (test-preload-062807) DBG | Closing plugin on server side
	I0725 18:31:34.577022   46540 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0725 18:31:34.578187   46540 addons.go:510] duration metric: took 1.244465659s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0725 18:31:35.535862   46540 node_ready.go:53] node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:37.536227   46540 node_ready.go:53] node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:39.537419   46540 node_ready.go:53] node "test-preload-062807" has status "Ready":"False"
	I0725 18:31:41.535208   46540 node_ready.go:49] node "test-preload-062807" has status "Ready":"True"
	I0725 18:31:41.535238   46540 node_ready.go:38] duration metric: took 8.003956696s for node "test-preload-062807" to be "Ready" ...
	I0725 18:31:41.535249   46540 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:31:41.540030   46540 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-w9jg4" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:41.544640   46540 pod_ready.go:92] pod "coredns-6d4b75cb6d-w9jg4" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:41.544659   46540 pod_ready.go:81] duration metric: took 4.602126ms for pod "coredns-6d4b75cb6d-w9jg4" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:41.544666   46540 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:43.550407   46540 pod_ready.go:102] pod "etcd-test-preload-062807" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:44.549395   46540 pod_ready.go:92] pod "etcd-test-preload-062807" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:44.549416   46540 pod_ready.go:81] duration metric: took 3.004742945s for pod "etcd-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:44.549425   46540 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:45.057499   46540 pod_ready.go:92] pod "kube-apiserver-test-preload-062807" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:45.057522   46540 pod_ready.go:81] duration metric: took 508.090294ms for pod "kube-apiserver-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:45.057535   46540 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:45.063422   46540 pod_ready.go:92] pod "kube-controller-manager-test-preload-062807" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:45.063444   46540 pod_ready.go:81] duration metric: took 5.896021ms for pod "kube-controller-manager-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:45.063457   46540 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v75mr" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:45.068206   46540 pod_ready.go:92] pod "kube-proxy-v75mr" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:45.068224   46540 pod_ready.go:81] duration metric: took 4.759813ms for pod "kube-proxy-v75mr" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:45.068232   46540 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:45.135979   46540 pod_ready.go:92] pod "kube-scheduler-test-preload-062807" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:45.136004   46540 pod_ready.go:81] duration metric: took 67.765779ms for pod "kube-scheduler-test-preload-062807" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:45.136014   46540 pod_ready.go:38] duration metric: took 3.600748048s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:31:45.136025   46540 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:31:45.136071   46540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:31:45.150328   46540 api_server.go:72] duration metric: took 11.816605838s to wait for apiserver process to appear ...
	I0725 18:31:45.150354   46540 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:31:45.150380   46540 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I0725 18:31:45.156779   46540 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
	I0725 18:31:45.157844   46540 api_server.go:141] control plane version: v1.24.4
	I0725 18:31:45.157864   46540 api_server.go:131] duration metric: took 7.503788ms to wait for apiserver health ...
	I0725 18:31:45.157873   46540 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:31:45.337797   46540 system_pods.go:59] 7 kube-system pods found
	I0725 18:31:45.337829   46540 system_pods.go:61] "coredns-6d4b75cb6d-w9jg4" [aca89fb1-320d-48a7-bafb-2420d8d0ac29] Running
	I0725 18:31:45.337834   46540 system_pods.go:61] "etcd-test-preload-062807" [294d9123-052b-40f9-9e8c-f54a83142250] Running
	I0725 18:31:45.337838   46540 system_pods.go:61] "kube-apiserver-test-preload-062807" [56ac8d42-ff83-4698-8fe4-7598e479fecf] Running
	I0725 18:31:45.337841   46540 system_pods.go:61] "kube-controller-manager-test-preload-062807" [7c9bf23e-6366-4a93-b70f-bfd2d468ce28] Running
	I0725 18:31:45.337844   46540 system_pods.go:61] "kube-proxy-v75mr" [b061a189-fa58-47b3-88a7-261eeb02f88b] Running
	I0725 18:31:45.337853   46540 system_pods.go:61] "kube-scheduler-test-preload-062807" [2d1a9ea2-3dfb-4d17-9679-39ed2db80ded] Running
	I0725 18:31:45.337857   46540 system_pods.go:61] "storage-provisioner" [af048abb-6a72-49ef-8942-cb7cc6c9aa68] Running
	I0725 18:31:45.337869   46540 system_pods.go:74] duration metric: took 179.984013ms to wait for pod list to return data ...
	I0725 18:31:45.337877   46540 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:31:45.534982   46540 default_sa.go:45] found service account: "default"
	I0725 18:31:45.535009   46540 default_sa.go:55] duration metric: took 197.126071ms for default service account to be created ...
	I0725 18:31:45.535017   46540 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:31:45.737315   46540 system_pods.go:86] 7 kube-system pods found
	I0725 18:31:45.737341   46540 system_pods.go:89] "coredns-6d4b75cb6d-w9jg4" [aca89fb1-320d-48a7-bafb-2420d8d0ac29] Running
	I0725 18:31:45.737347   46540 system_pods.go:89] "etcd-test-preload-062807" [294d9123-052b-40f9-9e8c-f54a83142250] Running
	I0725 18:31:45.737353   46540 system_pods.go:89] "kube-apiserver-test-preload-062807" [56ac8d42-ff83-4698-8fe4-7598e479fecf] Running
	I0725 18:31:45.737357   46540 system_pods.go:89] "kube-controller-manager-test-preload-062807" [7c9bf23e-6366-4a93-b70f-bfd2d468ce28] Running
	I0725 18:31:45.737364   46540 system_pods.go:89] "kube-proxy-v75mr" [b061a189-fa58-47b3-88a7-261eeb02f88b] Running
	I0725 18:31:45.737368   46540 system_pods.go:89] "kube-scheduler-test-preload-062807" [2d1a9ea2-3dfb-4d17-9679-39ed2db80ded] Running
	I0725 18:31:45.737371   46540 system_pods.go:89] "storage-provisioner" [af048abb-6a72-49ef-8942-cb7cc6c9aa68] Running
	I0725 18:31:45.737378   46540 system_pods.go:126] duration metric: took 202.355829ms to wait for k8s-apps to be running ...
	I0725 18:31:45.737385   46540 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:31:45.737432   46540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:31:45.751628   46540 system_svc.go:56] duration metric: took 14.233993ms WaitForService to wait for kubelet
	I0725 18:31:45.751658   46540 kubeadm.go:582] duration metric: took 12.417958015s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:31:45.751675   46540 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:31:45.937625   46540 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:31:45.937654   46540 node_conditions.go:123] node cpu capacity is 2
	I0725 18:31:45.937665   46540 node_conditions.go:105] duration metric: took 185.984969ms to run NodePressure ...
	I0725 18:31:45.937679   46540 start.go:241] waiting for startup goroutines ...
	I0725 18:31:45.937687   46540 start.go:246] waiting for cluster config update ...
	I0725 18:31:45.937701   46540 start.go:255] writing updated cluster config ...
	I0725 18:31:45.937997   46540 ssh_runner.go:195] Run: rm -f paused
	I0725 18:31:45.982277   46540 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0725 18:31:45.984410   46540 out.go:177] 
	W0725 18:31:45.985790   46540 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0725 18:31:45.986968   46540 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0725 18:31:45.988112   46540 out.go:177] * Done! kubectl is now configured to use "test-preload-062807" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.809802012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932306809780381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64cb2aec-e2d7-42e2-95c7-6ca70513b974 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.810174070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c7029bc-12ad-4b9f-b2ba-e0706bd2edcd name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.810222009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c7029bc-12ad-4b9f-b2ba-e0706bd2edcd name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.810377539Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:602fc786256e41bc7737bff968a0d43e12709521bec2c5ece5b2481e3f1d200a,PodSandboxId:3df846c7b46d82b50264d9114ff9d842c6193fbace6233e7120777498b0b50cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721932299720465304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w9jg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca89fb1-320d-48a7-bafb-2420d8d0ac29,},Annotations:map[string]string{io.kubernetes.container.hash: 81c5f0fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d88142494cf273d2c0813a05058fe17bd1f533c54f513600cb5b10eea0bffd7,PodSandboxId:5b2e2ace78645f8cd0e644509bee1e738c3c12b4033eef6ccc8ec598745a46e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721932292503074112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v75mr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b061a189-fa58-47b3-88a7-261eeb02f88b,},Annotations:map[string]string{io.kubernetes.container.hash: c211ebdc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d13c3c26722c30f91100526926bdd23902dc757137773042e62b6867d5587f,PodSandboxId:56cfe80e7f8466a79d40a15e5eafd441dace82fe2d706e8d17b9b1b1dc44b1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721932292289987739,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af048abb-6a72-49ef-8942-cb7cc6c9aa68,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5b4df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362eda449e0e25afb66d35429969abcd964ce99f2083273e2b37eb0105fe34c7,PodSandboxId:c01658ed741520a75de466ade23659d34f3977ee4b97df23420bcce995baad24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721932286300392941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c36477f93805e6b5698477120d954d,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2391db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7662e8b2daf77addb8f8fca2dc69e900afb1434e97b273d3d25250969fb6e2c3,PodSandboxId:9b4f09e277022f8ac7a8780087500585efcef47e133b3d5d6ba31300086285e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721932286257175908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a825418bb7b5fa608147813a512c20,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913a55cf01d97c94d216262e14ebbb6e049c788d0f1dd04d24f0da550934fdb0,PodSandboxId:1e8ffae5f7eda66d09e0a810ceb11dca1965d00aaffcdbaa100e93d551655cbb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721932286246390802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75da250d2256ce092fbd2c1403776b6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:679a15754b88d3b292f5736bdff43c9ad1742fefd3244e2c0ae5970b610f2024,PodSandboxId:172d011e8b68a6e6dae3265a9440e3ed24c8d1faec07033b59fda7e542c9d9d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721932286191837966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994e1d85e05721b21243686ba0f7dd2d,},Annotations:map[string]string{io.kubernetes.container.hash: d9627243,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c7029bc-12ad-4b9f-b2ba-e0706bd2edcd name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.843293167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=186a8707-889a-4722-ad14-91ecef55d5ab name=/runtime.v1.RuntimeService/Version
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.843364920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=186a8707-889a-4722-ad14-91ecef55d5ab name=/runtime.v1.RuntimeService/Version
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.844096021Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bfe6d75a-d9d7-4715-b4d2-cc43f436506d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.844509529Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932306844489155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bfe6d75a-d9d7-4715-b4d2-cc43f436506d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.845197728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e39eb617-d3d1-419d-afb0-825da9776983 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.845249978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e39eb617-d3d1-419d-afb0-825da9776983 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.845407040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:602fc786256e41bc7737bff968a0d43e12709521bec2c5ece5b2481e3f1d200a,PodSandboxId:3df846c7b46d82b50264d9114ff9d842c6193fbace6233e7120777498b0b50cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721932299720465304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w9jg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca89fb1-320d-48a7-bafb-2420d8d0ac29,},Annotations:map[string]string{io.kubernetes.container.hash: 81c5f0fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d88142494cf273d2c0813a05058fe17bd1f533c54f513600cb5b10eea0bffd7,PodSandboxId:5b2e2ace78645f8cd0e644509bee1e738c3c12b4033eef6ccc8ec598745a46e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721932292503074112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v75mr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b061a189-fa58-47b3-88a7-261eeb02f88b,},Annotations:map[string]string{io.kubernetes.container.hash: c211ebdc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d13c3c26722c30f91100526926bdd23902dc757137773042e62b6867d5587f,PodSandboxId:56cfe80e7f8466a79d40a15e5eafd441dace82fe2d706e8d17b9b1b1dc44b1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721932292289987739,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af
048abb-6a72-49ef-8942-cb7cc6c9aa68,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5b4df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362eda449e0e25afb66d35429969abcd964ce99f2083273e2b37eb0105fe34c7,PodSandboxId:c01658ed741520a75de466ade23659d34f3977ee4b97df23420bcce995baad24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721932286300392941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c36477f
93805e6b5698477120d954d,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2391db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7662e8b2daf77addb8f8fca2dc69e900afb1434e97b273d3d25250969fb6e2c3,PodSandboxId:9b4f09e277022f8ac7a8780087500585efcef47e133b3d5d6ba31300086285e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721932286257175908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a825418bb7b5fa6081
47813a512c20,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913a55cf01d97c94d216262e14ebbb6e049c788d0f1dd04d24f0da550934fdb0,PodSandboxId:1e8ffae5f7eda66d09e0a810ceb11dca1965d00aaffcdbaa100e93d551655cbb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721932286246390802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75da
250d2256ce092fbd2c1403776b6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:679a15754b88d3b292f5736bdff43c9ad1742fefd3244e2c0ae5970b610f2024,PodSandboxId:172d011e8b68a6e6dae3265a9440e3ed24c8d1faec07033b59fda7e542c9d9d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721932286191837966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994e1d85e05721b21243686ba0f7dd2d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d9627243,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e39eb617-d3d1-419d-afb0-825da9776983 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.879902451Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6ea8c48-2de6-4da7-9af5-321d6f82ee56 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.880060936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6ea8c48-2de6-4da7-9af5-321d6f82ee56 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.881383960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=186f5631-bb38-47df-8a3f-bca31dd2ae82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.881989182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932306881964804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=186f5631-bb38-47df-8a3f-bca31dd2ae82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.882384612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3efe8788-bf91-475f-978f-7001dfafff0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.882430811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3efe8788-bf91-475f-978f-7001dfafff0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.882608234Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:602fc786256e41bc7737bff968a0d43e12709521bec2c5ece5b2481e3f1d200a,PodSandboxId:3df846c7b46d82b50264d9114ff9d842c6193fbace6233e7120777498b0b50cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721932299720465304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w9jg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca89fb1-320d-48a7-bafb-2420d8d0ac29,},Annotations:map[string]string{io.kubernetes.container.hash: 81c5f0fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d88142494cf273d2c0813a05058fe17bd1f533c54f513600cb5b10eea0bffd7,PodSandboxId:5b2e2ace78645f8cd0e644509bee1e738c3c12b4033eef6ccc8ec598745a46e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721932292503074112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v75mr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b061a189-fa58-47b3-88a7-261eeb02f88b,},Annotations:map[string]string{io.kubernetes.container.hash: c211ebdc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d13c3c26722c30f91100526926bdd23902dc757137773042e62b6867d5587f,PodSandboxId:56cfe80e7f8466a79d40a15e5eafd441dace82fe2d706e8d17b9b1b1dc44b1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721932292289987739,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af
048abb-6a72-49ef-8942-cb7cc6c9aa68,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5b4df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362eda449e0e25afb66d35429969abcd964ce99f2083273e2b37eb0105fe34c7,PodSandboxId:c01658ed741520a75de466ade23659d34f3977ee4b97df23420bcce995baad24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721932286300392941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c36477f
93805e6b5698477120d954d,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2391db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7662e8b2daf77addb8f8fca2dc69e900afb1434e97b273d3d25250969fb6e2c3,PodSandboxId:9b4f09e277022f8ac7a8780087500585efcef47e133b3d5d6ba31300086285e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721932286257175908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a825418bb7b5fa6081
47813a512c20,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913a55cf01d97c94d216262e14ebbb6e049c788d0f1dd04d24f0da550934fdb0,PodSandboxId:1e8ffae5f7eda66d09e0a810ceb11dca1965d00aaffcdbaa100e93d551655cbb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721932286246390802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75da
250d2256ce092fbd2c1403776b6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:679a15754b88d3b292f5736bdff43c9ad1742fefd3244e2c0ae5970b610f2024,PodSandboxId:172d011e8b68a6e6dae3265a9440e3ed24c8d1faec07033b59fda7e542c9d9d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721932286191837966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994e1d85e05721b21243686ba0f7dd2d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d9627243,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3efe8788-bf91-475f-978f-7001dfafff0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.912428141Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98886089-ba4c-4716-b802-14f5540ea8eb name=/runtime.v1.RuntimeService/Version
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.912496255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98886089-ba4c-4716-b802-14f5540ea8eb name=/runtime.v1.RuntimeService/Version
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.913425090Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c261501-d75d-4b42-88ac-c201b19ada83 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.913876483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932306913855512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c261501-d75d-4b42-88ac-c201b19ada83 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.914342022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7460c0f-42e2-4171-8123-8f7090bf85e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.914501309Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7460c0f-42e2-4171-8123-8f7090bf85e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:31:46 test-preload-062807 crio[679]: time="2024-07-25 18:31:46.914765293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:602fc786256e41bc7737bff968a0d43e12709521bec2c5ece5b2481e3f1d200a,PodSandboxId:3df846c7b46d82b50264d9114ff9d842c6193fbace6233e7120777498b0b50cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721932299720465304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-w9jg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aca89fb1-320d-48a7-bafb-2420d8d0ac29,},Annotations:map[string]string{io.kubernetes.container.hash: 81c5f0fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d88142494cf273d2c0813a05058fe17bd1f533c54f513600cb5b10eea0bffd7,PodSandboxId:5b2e2ace78645f8cd0e644509bee1e738c3c12b4033eef6ccc8ec598745a46e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721932292503074112,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v75mr,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b061a189-fa58-47b3-88a7-261eeb02f88b,},Annotations:map[string]string{io.kubernetes.container.hash: c211ebdc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88d13c3c26722c30f91100526926bdd23902dc757137773042e62b6867d5587f,PodSandboxId:56cfe80e7f8466a79d40a15e5eafd441dace82fe2d706e8d17b9b1b1dc44b1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721932292289987739,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af
048abb-6a72-49ef-8942-cb7cc6c9aa68,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5b4df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362eda449e0e25afb66d35429969abcd964ce99f2083273e2b37eb0105fe34c7,PodSandboxId:c01658ed741520a75de466ade23659d34f3977ee4b97df23420bcce995baad24,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721932286300392941,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c36477f
93805e6b5698477120d954d,},Annotations:map[string]string{io.kubernetes.container.hash: 1b2391db,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7662e8b2daf77addb8f8fca2dc69e900afb1434e97b273d3d25250969fb6e2c3,PodSandboxId:9b4f09e277022f8ac7a8780087500585efcef47e133b3d5d6ba31300086285e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721932286257175908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0a825418bb7b5fa6081
47813a512c20,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:913a55cf01d97c94d216262e14ebbb6e049c788d0f1dd04d24f0da550934fdb0,PodSandboxId:1e8ffae5f7eda66d09e0a810ceb11dca1965d00aaffcdbaa100e93d551655cbb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721932286246390802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75da
250d2256ce092fbd2c1403776b6b,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:679a15754b88d3b292f5736bdff43c9ad1742fefd3244e2c0ae5970b610f2024,PodSandboxId:172d011e8b68a6e6dae3265a9440e3ed24c8d1faec07033b59fda7e542c9d9d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721932286191837966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-062807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 994e1d85e05721b21243686ba0f7dd2d,},Annotation
s:map[string]string{io.kubernetes.container.hash: d9627243,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7460c0f-42e2-4171-8123-8f7090bf85e0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	602fc786256e4       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   3df846c7b46d8       coredns-6d4b75cb6d-w9jg4
	4d88142494cf2       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   5b2e2ace78645       kube-proxy-v75mr
	88d13c3c26722       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   56cfe80e7f846       storage-provisioner
	362eda449e0e2       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   c01658ed74152       kube-apiserver-test-preload-062807
	7662e8b2daf77       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   9b4f09e277022       kube-scheduler-test-preload-062807
	913a55cf01d97       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   1e8ffae5f7eda       kube-controller-manager-test-preload-062807
	679a15754b88d       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   172d011e8b68a       etcd-test-preload-062807
	
	
	==> coredns [602fc786256e41bc7737bff968a0d43e12709521bec2c5ece5b2481e3f1d200a] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:38485 - 24727 "HINFO IN 370832319317019188.2953264209612790033. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011655045s
	
	
	==> describe nodes <==
	Name:               test-preload-062807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-062807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=test-preload-062807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_30_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:30:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-062807
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:31:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 18:31:41 +0000   Thu, 25 Jul 2024 18:29:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 18:31:41 +0000   Thu, 25 Jul 2024 18:29:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 18:31:41 +0000   Thu, 25 Jul 2024 18:29:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 18:31:41 +0000   Thu, 25 Jul 2024 18:31:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    test-preload-062807
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 79df84c99b6f41f2902d58ddb8be071d
	  System UUID:                79df84c9-9b6f-41f2-902d-58ddb8be071d
	  Boot ID:                    dc919b97-6ac0-42fc-8a98-c4fef3f30e78
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-w9jg4                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     90s
	  kube-system                 etcd-test-preload-062807                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         103s
	  kube-system                 kube-apiserver-test-preload-062807             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-test-preload-062807    200m (10%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-v75mr                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-test-preload-062807             100m (5%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 87s                kube-proxy       
	  Normal  Starting                 102s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  102s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  102s               kubelet          Node test-preload-062807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s               kubelet          Node test-preload-062807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s               kubelet          Node test-preload-062807 status is now: NodeHasSufficientPID
	  Normal  NodeReady                92s                kubelet          Node test-preload-062807 status is now: NodeReady
	  Normal  RegisteredNode           90s                node-controller  Node test-preload-062807 event: Registered Node test-preload-062807 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-062807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-062807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-062807 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-062807 event: Registered Node test-preload-062807 in Controller
	
	
	==> dmesg <==
	[Jul25 18:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050388] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036022] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul25 18:31] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.837924] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.547697] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.955813] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.054526] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062514] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.165127] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.151793] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.253527] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[ +12.752436] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +0.052429] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.580839] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +6.915066] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.174107] systemd-fstab-generator[1696]: Ignoring "noauto" option for root device
	[  +6.126926] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [679a15754b88d3b292f5736bdff43c9ad1742fefd3244e2c0ae5970b610f2024] <==
	{"level":"info","ts":"2024-07-25T18:31:26.490Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"28dd8e6bbca035f5","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-25T18:31:26.491Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-25T18:31:26.511Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-25T18:31:26.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 switched to configuration voters=(2944666324747433461)"}
	{"level":"info","ts":"2024-07-25T18:31:26.512Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","added-peer-id":"28dd8e6bbca035f5","added-peer-peer-urls":["https://192.168.39.203:2380"]}
	{"level":"info","ts":"2024-07-25T18:31:26.512Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3b4a61fb6ca7242f","local-member-id":"28dd8e6bbca035f5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:31:26.512Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:31:26.516Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-07-25T18:31:26.516Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.203:2380"}
	{"level":"info","ts":"2024-07-25T18:31:26.523Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"28dd8e6bbca035f5","initial-advertise-peer-urls":["https://192.168.39.203:2380"],"listen-peer-urls":["https://192.168.39.203:2380"],"advertise-client-urls":["https://192.168.39.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:31:26.523Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:31:28.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-25T18:31:28.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-25T18:31:28.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgPreVoteResp from 28dd8e6bbca035f5 at term 2"}
	{"level":"info","ts":"2024-07-25T18:31:28.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became candidate at term 3"}
	{"level":"info","ts":"2024-07-25T18:31:28.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 received MsgVoteResp from 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-07-25T18:31:28.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28dd8e6bbca035f5 became leader at term 3"}
	{"level":"info","ts":"2024-07-25T18:31:28.267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28dd8e6bbca035f5 elected leader 28dd8e6bbca035f5 at term 3"}
	{"level":"info","ts":"2024-07-25T18:31:28.273Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"28dd8e6bbca035f5","local-member-attributes":"{Name:test-preload-062807 ClientURLs:[https://192.168.39.203:2379]}","request-path":"/0/members/28dd8e6bbca035f5/attributes","cluster-id":"3b4a61fb6ca7242f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:31:28.273Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:31:28.273Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:31:28.273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:31:28.273Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:31:28.274Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:31:28.275Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.203:2379"}
	
	
	==> kernel <==
	 18:31:47 up 0 min,  0 users,  load average: 0.77, 0.23, 0.08
	Linux test-preload-062807 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [362eda449e0e25afb66d35429969abcd964ce99f2083273e2b37eb0105fe34c7] <==
	I0725 18:31:30.679707       1 controller.go:85] Starting OpenAPI V3 controller
	I0725 18:31:30.680047       1 naming_controller.go:291] Starting NamingConditionController
	I0725 18:31:30.680116       1 establishing_controller.go:76] Starting EstablishingController
	I0725 18:31:30.680154       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0725 18:31:30.680183       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0725 18:31:30.680212       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0725 18:31:30.727201       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0725 18:31:30.728239       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 18:31:30.729237       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0725 18:31:30.735328       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 18:31:30.743620       1 cache.go:39] Caches are synced for autoregister controller
	I0725 18:31:30.757861       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 18:31:30.758810       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0725 18:31:30.770778       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0725 18:31:30.805099       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0725 18:31:31.328020       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0725 18:31:31.632403       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 18:31:32.242050       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 18:31:32.256224       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 18:31:32.315180       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 18:31:32.339039       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 18:31:32.345367       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 18:31:32.741795       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0725 18:31:43.826037       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0725 18:31:43.967151       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [913a55cf01d97c94d216262e14ebbb6e049c788d0f1dd04d24f0da550934fdb0] <==
	I0725 18:31:43.824089       1 shared_informer.go:262] Caches are synced for PVC protection
	I0725 18:31:43.825252       1 shared_informer.go:262] Caches are synced for TTL
	I0725 18:31:43.852115       1 shared_informer.go:262] Caches are synced for taint
	I0725 18:31:43.852383       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0725 18:31:43.852389       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0725 18:31:43.852540       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-062807. Assuming now as a timestamp.
	I0725 18:31:43.852593       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0725 18:31:43.853151       1 event.go:294] "Event occurred" object="test-preload-062807" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-062807 event: Registered Node test-preload-062807 in Controller"
	I0725 18:31:43.860640       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0725 18:31:43.863388       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0725 18:31:43.909336       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0725 18:31:43.909394       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0725 18:31:43.909416       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0725 18:31:43.909530       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0725 18:31:43.930456       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0725 18:31:43.955972       1 shared_informer.go:262] Caches are synced for endpoint
	I0725 18:31:43.964786       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0725 18:31:43.985938       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 18:31:44.003548       1 shared_informer.go:262] Caches are synced for disruption
	I0725 18:31:44.003580       1 disruption.go:371] Sending events to api server.
	I0725 18:31:44.039194       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 18:31:44.058798       1 shared_informer.go:262] Caches are synced for deployment
	I0725 18:31:44.468230       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 18:31:44.521389       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 18:31:44.521428       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [4d88142494cf273d2c0813a05058fe17bd1f533c54f513600cb5b10eea0bffd7] <==
	I0725 18:31:32.692390       1 node.go:163] Successfully retrieved node IP: 192.168.39.203
	I0725 18:31:32.692460       1 server_others.go:138] "Detected node IP" address="192.168.39.203"
	I0725 18:31:32.692494       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 18:31:32.733638       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0725 18:31:32.733664       1 server_others.go:206] "Using iptables Proxier"
	I0725 18:31:32.734316       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 18:31:32.735152       1 server.go:661] "Version info" version="v1.24.4"
	I0725 18:31:32.735181       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:31:32.736873       1 config.go:317] "Starting service config controller"
	I0725 18:31:32.737066       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 18:31:32.737106       1 config.go:226] "Starting endpoint slice config controller"
	I0725 18:31:32.737123       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 18:31:32.738224       1 config.go:444] "Starting node config controller"
	I0725 18:31:32.738248       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 18:31:32.838090       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0725 18:31:32.838173       1 shared_informer.go:262] Caches are synced for service config
	I0725 18:31:32.838310       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [7662e8b2daf77addb8f8fca2dc69e900afb1434e97b273d3d25250969fb6e2c3] <==
	I0725 18:31:27.074934       1 serving.go:348] Generated self-signed cert in-memory
	W0725 18:31:30.690808       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 18:31:30.690877       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:31:30.690918       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 18:31:30.690932       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 18:31:30.751327       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0725 18:31:30.751359       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:31:30.760108       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0725 18:31:30.761160       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:31:30.761195       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:31:30.761990       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 18:31:30.862179       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 18:31:30 test-preload-062807 kubelet[1070]: I0725 18:31:30.785481    1070 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-062807"
	Jul 25 18:31:30 test-preload-062807 kubelet[1070]: I0725 18:31:30.789489    1070 setters.go:532] "Node became not ready" node="test-preload-062807" condition={Type:Ready Status:False LastHeartbeatTime:2024-07-25 18:31:30.789351869 +0000 UTC m=+5.401404033 LastTransitionTime:2024-07-25 18:31:30.789351869 +0000 UTC m=+5.401404033 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.510799    1070 apiserver.go:52] "Watching apiserver"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.514380    1070 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.514525    1070 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.514568    1070 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: E0725 18:31:31.518296    1070 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-w9jg4" podUID=aca89fb1-320d-48a7-bafb-2420d8d0ac29
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.567440    1070 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b061a189-fa58-47b3-88a7-261eeb02f88b-xtables-lock\") pod \"kube-proxy-v75mr\" (UID: \"b061a189-fa58-47b3-88a7-261eeb02f88b\") " pod="kube-system/kube-proxy-v75mr"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.568067    1070 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aca89fb1-320d-48a7-bafb-2420d8d0ac29-config-volume\") pod \"coredns-6d4b75cb6d-w9jg4\" (UID: \"aca89fb1-320d-48a7-bafb-2420d8d0ac29\") " pod="kube-system/coredns-6d4b75cb6d-w9jg4"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.568230    1070 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6h5d\" (UniqueName: \"kubernetes.io/projected/aca89fb1-320d-48a7-bafb-2420d8d0ac29-kube-api-access-t6h5d\") pod \"coredns-6d4b75cb6d-w9jg4\" (UID: \"aca89fb1-320d-48a7-bafb-2420d8d0ac29\") " pod="kube-system/coredns-6d4b75cb6d-w9jg4"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.568375    1070 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ndf9\" (UniqueName: \"kubernetes.io/projected/af048abb-6a72-49ef-8942-cb7cc6c9aa68-kube-api-access-5ndf9\") pod \"storage-provisioner\" (UID: \"af048abb-6a72-49ef-8942-cb7cc6c9aa68\") " pod="kube-system/storage-provisioner"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.568521    1070 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b061a189-fa58-47b3-88a7-261eeb02f88b-lib-modules\") pod \"kube-proxy-v75mr\" (UID: \"b061a189-fa58-47b3-88a7-261eeb02f88b\") " pod="kube-system/kube-proxy-v75mr"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.568643    1070 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/af048abb-6a72-49ef-8942-cb7cc6c9aa68-tmp\") pod \"storage-provisioner\" (UID: \"af048abb-6a72-49ef-8942-cb7cc6c9aa68\") " pod="kube-system/storage-provisioner"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.568783    1070 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b061a189-fa58-47b3-88a7-261eeb02f88b-kube-proxy\") pod \"kube-proxy-v75mr\" (UID: \"b061a189-fa58-47b3-88a7-261eeb02f88b\") " pod="kube-system/kube-proxy-v75mr"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.568910    1070 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld68d\" (UniqueName: \"kubernetes.io/projected/b061a189-fa58-47b3-88a7-261eeb02f88b-kube-api-access-ld68d\") pod \"kube-proxy-v75mr\" (UID: \"b061a189-fa58-47b3-88a7-261eeb02f88b\") " pod="kube-system/kube-proxy-v75mr"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: I0725 18:31:31.569010    1070 reconciler.go:159] "Reconciler: start to sync state"
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: E0725 18:31:31.672557    1070 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 25 18:31:31 test-preload-062807 kubelet[1070]: E0725 18:31:31.672775    1070 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/aca89fb1-320d-48a7-bafb-2420d8d0ac29-config-volume podName:aca89fb1-320d-48a7-bafb-2420d8d0ac29 nodeName:}" failed. No retries permitted until 2024-07-25 18:31:32.172696313 +0000 UTC m=+6.784748492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aca89fb1-320d-48a7-bafb-2420d8d0ac29-config-volume") pod "coredns-6d4b75cb6d-w9jg4" (UID: "aca89fb1-320d-48a7-bafb-2420d8d0ac29") : object "kube-system"/"coredns" not registered
	Jul 25 18:31:32 test-preload-062807 kubelet[1070]: E0725 18:31:32.175579    1070 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 25 18:31:32 test-preload-062807 kubelet[1070]: E0725 18:31:32.175641    1070 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/aca89fb1-320d-48a7-bafb-2420d8d0ac29-config-volume podName:aca89fb1-320d-48a7-bafb-2420d8d0ac29 nodeName:}" failed. No retries permitted until 2024-07-25 18:31:33.175626299 +0000 UTC m=+7.787678463 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aca89fb1-320d-48a7-bafb-2420d8d0ac29-config-volume") pod "coredns-6d4b75cb6d-w9jg4" (UID: "aca89fb1-320d-48a7-bafb-2420d8d0ac29") : object "kube-system"/"coredns" not registered
	Jul 25 18:31:33 test-preload-062807 kubelet[1070]: E0725 18:31:33.181572    1070 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 25 18:31:33 test-preload-062807 kubelet[1070]: E0725 18:31:33.182182    1070 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/aca89fb1-320d-48a7-bafb-2420d8d0ac29-config-volume podName:aca89fb1-320d-48a7-bafb-2420d8d0ac29 nodeName:}" failed. No retries permitted until 2024-07-25 18:31:35.182119341 +0000 UTC m=+9.794171519 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aca89fb1-320d-48a7-bafb-2420d8d0ac29-config-volume") pod "coredns-6d4b75cb6d-w9jg4" (UID: "aca89fb1-320d-48a7-bafb-2420d8d0ac29") : object "kube-system"/"coredns" not registered
	Jul 25 18:31:33 test-preload-062807 kubelet[1070]: E0725 18:31:33.603042    1070 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-w9jg4" podUID=aca89fb1-320d-48a7-bafb-2420d8d0ac29
	Jul 25 18:31:35 test-preload-062807 kubelet[1070]: E0725 18:31:35.200224    1070 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 25 18:31:35 test-preload-062807 kubelet[1070]: E0725 18:31:35.200715    1070 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/aca89fb1-320d-48a7-bafb-2420d8d0ac29-config-volume podName:aca89fb1-320d-48a7-bafb-2420d8d0ac29 nodeName:}" failed. No retries permitted until 2024-07-25 18:31:39.200644103 +0000 UTC m=+13.812696280 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aca89fb1-320d-48a7-bafb-2420d8d0ac29-config-volume") pod "coredns-6d4b75cb6d-w9jg4" (UID: "aca89fb1-320d-48a7-bafb-2420d8d0ac29") : object "kube-system"/"coredns" not registered
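The kubelet entries above show its volume-mount retries backing off geometrically (500ms, 1s, 2s, 4s) while the kube-system/coredns ConfigMap is not yet registered with the restarted API server. A minimal Go sketch of that doubling-backoff pattern, purely illustrative and not kubelet code:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff retries op, doubling the wait after each failure up to max,
	// mirroring the 500ms -> 1s -> 2s -> 4s progression in the kubelet log above.
	func retryWithBackoff(op func() error, initial, max time.Duration, attempts int) error {
		delay := initial
		for i := 1; i <= attempts; i++ {
			if err := op(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed; no retries permitted for %s\n", i, delay)
			time.Sleep(delay)
			delay *= 2
			if delay > max {
				delay = max
			}
		}
		return errors.New("giving up after all attempts")
	}

	func main() {
		mount := func() error { return errors.New(`object "kube-system"/"coredns" not registered`) }
		_ = retryWithBackoff(mount, 500*time.Millisecond, 4*time.Second, 4)
	}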
	
	
	==> storage-provisioner [88d13c3c26722c30f91100526926bdd23902dc757137773042e62b6867d5587f] <==
	I0725 18:31:32.397047       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-062807 -n test-preload-062807
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-062807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-062807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-062807
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-062807: (1.12633913s)
--- FAIL: TestPreload (181.12s)

                                                
                                    
x
+
TestKubernetesUpgrade (447.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069209 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-069209 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m47.961435789s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-069209] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-069209" primary control-plane node in "kubernetes-upgrade-069209" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:33:39.335227   48054 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:33:39.335328   48054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:33:39.335336   48054 out.go:304] Setting ErrFile to fd 2...
	I0725 18:33:39.335341   48054 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:33:39.335517   48054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:33:39.336302   48054 out.go:298] Setting JSON to false
	I0725 18:33:39.337187   48054 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4563,"bootTime":1721927856,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:33:39.337242   48054 start.go:139] virtualization: kvm guest
	I0725 18:33:39.338983   48054 out.go:177] * [kubernetes-upgrade-069209] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:33:39.340586   48054 notify.go:220] Checking for updates...
	I0725 18:33:39.341248   48054 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:33:39.343864   48054 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:33:39.346210   48054 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:33:39.348360   48054 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:33:39.350505   48054 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:33:39.353074   48054 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:33:39.354547   48054 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:33:39.391029   48054 out.go:177] * Using the kvm2 driver based on user configuration
	I0725 18:33:39.392397   48054 start.go:297] selected driver: kvm2
	I0725 18:33:39.392418   48054 start.go:901] validating driver "kvm2" against <nil>
	I0725 18:33:39.392454   48054 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:33:39.393421   48054 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:33:39.408845   48054 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:33:39.424770   48054 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:33:39.424817   48054 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 18:33:39.425067   48054 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 18:33:39.425094   48054 cni.go:84] Creating CNI manager for ""
	I0725 18:33:39.425102   48054 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:33:39.425109   48054 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 18:33:39.425182   48054 start.go:340] cluster config:
	{Name:kubernetes-upgrade-069209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-069209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:33:39.425297   48054 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:33:39.426966   48054 out.go:177] * Starting "kubernetes-upgrade-069209" primary control-plane node in "kubernetes-upgrade-069209" cluster
	I0725 18:33:39.428157   48054 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:33:39.428199   48054 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0725 18:33:39.428209   48054 cache.go:56] Caching tarball of preloaded images
	I0725 18:33:39.428298   48054 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:33:39.428336   48054 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0725 18:33:39.428793   48054 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/config.json ...
	I0725 18:33:39.428827   48054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/config.json: {Name:mk2d1789174d3bfc2785a93ae95fd89679f8d84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:33:39.429000   48054 start.go:360] acquireMachinesLock for kubernetes-upgrade-069209: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:34:00.612919   48054 start.go:364] duration metric: took 21.183887355s to acquireMachinesLock for "kubernetes-upgrade-069209"
	I0725 18:34:00.613008   48054 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-069209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-069209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:34:00.613127   48054 start.go:125] createHost starting for "" (driver="kvm2")
	I0725 18:34:00.615095   48054 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 18:34:00.615331   48054 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:34:00.615373   48054 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:34:00.632270   48054 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0725 18:34:00.632728   48054 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:34:00.633339   48054 main.go:141] libmachine: Using API Version  1
	I0725 18:34:00.633364   48054 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:34:00.633693   48054 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:34:00.633897   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetMachineName
	I0725 18:34:00.634027   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:34:00.634181   48054 start.go:159] libmachine.API.Create for "kubernetes-upgrade-069209" (driver="kvm2")
	I0725 18:34:00.634211   48054 client.go:168] LocalClient.Create starting
	I0725 18:34:00.634245   48054 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 18:34:00.634286   48054 main.go:141] libmachine: Decoding PEM data...
	I0725 18:34:00.634314   48054 main.go:141] libmachine: Parsing certificate...
	I0725 18:34:00.634383   48054 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 18:34:00.634408   48054 main.go:141] libmachine: Decoding PEM data...
	I0725 18:34:00.634424   48054 main.go:141] libmachine: Parsing certificate...
	I0725 18:34:00.634446   48054 main.go:141] libmachine: Running pre-create checks...
	I0725 18:34:00.634461   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .PreCreateCheck
	I0725 18:34:00.634802   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetConfigRaw
	I0725 18:34:00.635236   48054 main.go:141] libmachine: Creating machine...
	I0725 18:34:00.635251   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .Create
	I0725 18:34:00.635415   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Creating KVM machine...
	I0725 18:34:00.636529   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found existing default KVM network
	I0725 18:34:00.637500   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:00.637322   48355 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b8:e7:59} reservation:<nil>}
	I0725 18:34:00.638192   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:00.638135   48355 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000252330}
	I0725 18:34:00.638222   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | created network xml: 
	I0725 18:34:00.638241   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | <network>
	I0725 18:34:00.638273   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG |   <name>mk-kubernetes-upgrade-069209</name>
	I0725 18:34:00.638292   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG |   <dns enable='no'/>
	I0725 18:34:00.638302   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG |   
	I0725 18:34:00.638313   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0725 18:34:00.638340   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG |     <dhcp>
	I0725 18:34:00.638363   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0725 18:34:00.638377   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG |     </dhcp>
	I0725 18:34:00.638384   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG |   </ip>
	I0725 18:34:00.638391   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG |   
	I0725 18:34:00.638398   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | </network>
	I0725 18:34:00.638410   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | 
	I0725 18:34:00.644162   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | trying to create private KVM network mk-kubernetes-upgrade-069209 192.168.50.0/24...
	I0725 18:34:00.716389   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | private KVM network mk-kubernetes-upgrade-069209 192.168.50.0/24 created
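The kvm2 driver generates the network XML shown above and creates the private network through the libvirt API. For comparison only, a hand-rolled equivalent could shell out to virsh; this sketch assumes virsh is installed and the XML has been saved to a hypothetical net.xml, and it is not what docker-machine-driver-kvm2 actually does:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// defineAndStartNetwork registers a libvirt network from an XML file and starts it,
	// roughly mirroring the "created network xml" / "network ... created" steps above.
	func defineAndStartNetwork(xmlPath, name string) error {
		if out, err := exec.Command("virsh", "net-define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("net-define: %v: %s", err, out)
		}
		if out, err := exec.Command("virsh", "net-start", name).CombinedOutput(); err != nil {
			return fmt.Errorf("net-start: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := defineAndStartNetwork("net.xml", "mk-kubernetes-upgrade-069209"); err != nil {
			log.Fatal(err)
		}
	}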
	I0725 18:34:00.716424   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:00.716370   48355 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:34:00.716439   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209 ...
	I0725 18:34:00.716455   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 18:34:00.716542   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 18:34:00.957383   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:00.957251   48355 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa...
	I0725 18:34:01.265561   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:01.265441   48355 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/kubernetes-upgrade-069209.rawdisk...
	I0725 18:34:01.265609   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Writing magic tar header
	I0725 18:34:01.265626   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Writing SSH key tar header
	I0725 18:34:01.265644   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:01.265561   48355 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209 ...
	I0725 18:34:01.265715   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209
	I0725 18:34:01.265755   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 18:34:01.265775   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209 (perms=drwx------)
	I0725 18:34:01.265793   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 18:34:01.265808   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:34:01.265819   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 18:34:01.265859   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 18:34:01.265878   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 18:34:01.265887   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 18:34:01.265900   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Checking permissions on dir: /home/jenkins
	I0725 18:34:01.265912   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Checking permissions on dir: /home
	I0725 18:34:01.265925   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Skipping /home - not owner
	I0725 18:34:01.265944   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 18:34:01.265961   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 18:34:01.265975   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Creating domain...
	I0725 18:34:01.267023   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) define libvirt domain using xml: 
	I0725 18:34:01.267045   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) <domain type='kvm'>
	I0725 18:34:01.267056   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   <name>kubernetes-upgrade-069209</name>
	I0725 18:34:01.267066   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   <memory unit='MiB'>2200</memory>
	I0725 18:34:01.267076   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   <vcpu>2</vcpu>
	I0725 18:34:01.267087   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   <features>
	I0725 18:34:01.267096   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <acpi/>
	I0725 18:34:01.267104   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <apic/>
	I0725 18:34:01.267125   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <pae/>
	I0725 18:34:01.267135   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     
	I0725 18:34:01.267162   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   </features>
	I0725 18:34:01.267185   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   <cpu mode='host-passthrough'>
	I0725 18:34:01.267193   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   
	I0725 18:34:01.267201   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   </cpu>
	I0725 18:34:01.267209   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   <os>
	I0725 18:34:01.267217   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <type>hvm</type>
	I0725 18:34:01.267226   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <boot dev='cdrom'/>
	I0725 18:34:01.267232   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <boot dev='hd'/>
	I0725 18:34:01.267237   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <bootmenu enable='no'/>
	I0725 18:34:01.267243   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   </os>
	I0725 18:34:01.267252   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   <devices>
	I0725 18:34:01.267265   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <disk type='file' device='cdrom'>
	I0725 18:34:01.267293   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/boot2docker.iso'/>
	I0725 18:34:01.267302   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <target dev='hdc' bus='scsi'/>
	I0725 18:34:01.267312   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <readonly/>
	I0725 18:34:01.267319   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     </disk>
	I0725 18:34:01.267327   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <disk type='file' device='disk'>
	I0725 18:34:01.267337   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 18:34:01.267367   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/kubernetes-upgrade-069209.rawdisk'/>
	I0725 18:34:01.267409   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <target dev='hda' bus='virtio'/>
	I0725 18:34:01.267427   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     </disk>
	I0725 18:34:01.267437   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <interface type='network'>
	I0725 18:34:01.267452   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <source network='mk-kubernetes-upgrade-069209'/>
	I0725 18:34:01.267461   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <model type='virtio'/>
	I0725 18:34:01.267474   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     </interface>
	I0725 18:34:01.267493   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <interface type='network'>
	I0725 18:34:01.267511   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <source network='default'/>
	I0725 18:34:01.267522   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <model type='virtio'/>
	I0725 18:34:01.267533   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     </interface>
	I0725 18:34:01.267545   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <serial type='pty'>
	I0725 18:34:01.267561   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <target port='0'/>
	I0725 18:34:01.267573   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     </serial>
	I0725 18:34:01.267585   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <console type='pty'>
	I0725 18:34:01.267602   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <target type='serial' port='0'/>
	I0725 18:34:01.267617   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     </console>
	I0725 18:34:01.267630   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     <rng model='virtio'>
	I0725 18:34:01.267638   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)       <backend model='random'>/dev/random</backend>
	I0725 18:34:01.267648   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     </rng>
	I0725 18:34:01.267658   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     
	I0725 18:34:01.267667   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)     
	I0725 18:34:01.267677   48054 main.go:141] libmachine: (kubernetes-upgrade-069209)   </devices>
	I0725 18:34:01.267685   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) </domain>
	I0725 18:34:01.267697   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) 
	I0725 18:34:01.272058   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:06:0f:8e in network default
	I0725 18:34:01.272708   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Ensuring networks are active...
	I0725 18:34:01.272732   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:01.273494   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Ensuring network default is active
	I0725 18:34:01.273854   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Ensuring network mk-kubernetes-upgrade-069209 is active
	I0725 18:34:01.274437   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Getting domain xml...
	I0725 18:34:01.275138   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Creating domain...
	I0725 18:34:02.569649   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Waiting to get IP...
	I0725 18:34:02.570509   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:02.571100   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:02.571128   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:02.571009   48355 retry.go:31] will retry after 240.731976ms: waiting for machine to come up
	I0725 18:34:02.813732   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:02.814278   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:02.814309   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:02.814227   48355 retry.go:31] will retry after 314.315666ms: waiting for machine to come up
	I0725 18:34:03.130625   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:03.131106   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:03.131126   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:03.131061   48355 retry.go:31] will retry after 430.880157ms: waiting for machine to come up
	I0725 18:34:03.563833   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:03.564265   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:03.564291   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:03.564231   48355 retry.go:31] will retry after 555.076589ms: waiting for machine to come up
	I0725 18:34:04.120897   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:04.121432   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:04.121462   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:04.121392   48355 retry.go:31] will retry after 517.944813ms: waiting for machine to come up
	I0725 18:34:04.641088   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:04.641456   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:04.641486   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:04.641417   48355 retry.go:31] will retry after 891.768821ms: waiting for machine to come up
	I0725 18:34:05.535279   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:05.535896   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:05.535926   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:05.535834   48355 retry.go:31] will retry after 739.265666ms: waiting for machine to come up
	I0725 18:34:06.276872   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:06.277360   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:06.277425   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:06.277324   48355 retry.go:31] will retry after 1.332487649s: waiting for machine to come up
	I0725 18:34:07.611232   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:07.611764   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:07.611798   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:07.611714   48355 retry.go:31] will retry after 1.705688971s: waiting for machine to come up
	I0725 18:34:09.318640   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:09.319164   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:09.319193   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:09.319118   48355 retry.go:31] will retry after 1.651525137s: waiting for machine to come up
	I0725 18:34:10.971969   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:10.972387   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:10.972419   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:10.972355   48355 retry.go:31] will retry after 2.423626224s: waiting for machine to come up
	I0725 18:34:13.398365   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:13.398896   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:13.398942   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:13.398840   48355 retry.go:31] will retry after 2.577172189s: waiting for machine to come up
	I0725 18:34:15.977976   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:15.978519   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find current IP address of domain kubernetes-upgrade-069209 in network mk-kubernetes-upgrade-069209
	I0725 18:34:15.978547   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | I0725 18:34:15.978476   48355 retry.go:31] will retry after 4.517215327s: waiting for machine to come up
	I0725 18:34:20.498380   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:20.498876   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Found IP for machine: 192.168.50.165
	I0725 18:34:20.498923   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has current primary IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
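The "Waiting to get IP..." retries above are the driver polling libvirt for a DHCP lease on the new domain's MAC address until the guest has booted far enough to request one. A rough way to watch for the same signal by hand is to poll virsh net-dhcp-leases; this is only an illustration (the network name and MAC below are the ones from this run, and virsh is assumed to be available):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForLease polls `virsh net-dhcp-leases` until a lease for the given MAC appears,
	// which is what the retry loop above is waiting for.
	func waitForLease(network, mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
			if err == nil {
				for _, line := range strings.Split(string(out), "\n") {
					if strings.Contains(line, mac) {
						return line, nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return "", fmt.Errorf("no DHCP lease for %s on %s", mac, network)
	}

	func main() {
		lease, err := waitForLease("mk-kubernetes-upgrade-069209", "52:54:00:33:50:c6", 2*time.Minute)
		fmt.Println(lease, err)
	}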
	I0725 18:34:20.498939   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Reserving static IP address...
	I0725 18:34:20.499255   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-069209", mac: "52:54:00:33:50:c6", ip: "192.168.50.165"} in network mk-kubernetes-upgrade-069209
	I0725 18:34:20.573540   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Getting to WaitForSSH function...
	I0725 18:34:20.573567   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Reserved static IP address: 192.168.50.165
	I0725 18:34:20.573577   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Waiting for SSH to be available...
	I0725 18:34:20.576348   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:20.576688   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:20.576720   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:20.576843   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Using SSH client type: external
	I0725 18:34:20.576884   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa (-rw-------)
	I0725 18:34:20.576924   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:34:20.576943   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | About to run SSH command:
	I0725 18:34:20.576961   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | exit 0
	I0725 18:34:20.699974   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | SSH cmd err, output: <nil>: 
	I0725 18:34:20.700249   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) KVM machine creation complete!
	I0725 18:34:20.700684   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetConfigRaw
	I0725 18:34:20.701199   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:34:20.701446   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:34:20.701591   48054 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 18:34:20.701616   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetState
	I0725 18:34:20.702855   48054 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 18:34:20.702868   48054 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 18:34:20.702875   48054 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 18:34:20.702880   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:34:20.705231   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:20.705572   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:20.705600   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:20.705775   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:34:20.705935   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:20.706065   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:20.706238   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:34:20.706424   48054 main.go:141] libmachine: Using SSH client type: native
	I0725 18:34:20.706607   48054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0725 18:34:20.706617   48054 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 18:34:20.807441   48054 main.go:141] libmachine: SSH cmd err, output: <nil>: 
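libmachine's readiness check above simply runs `exit 0` over SSH and treats a zero exit status as "the guest is reachable". A stand-alone sketch of the same probe using the system ssh client (assumed to be on PATH; the host, user, key path, and options are taken from the log above, so adjust them for any other machine):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReady returns nil once "exit 0" succeeds over SSH, i.e. the guest accepts
	// connections with the generated key -- the same signal libmachine waits for above.
	func sshReady(host, keyPath string) error {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("ssh not ready: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(sshReady("192.168.50.165",
			"/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa"))
	}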
	I0725 18:34:20.807469   48054 main.go:141] libmachine: Detecting the provisioner...
	I0725 18:34:20.807480   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:34:20.810222   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:20.810554   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:20.810591   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:20.810720   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:34:20.810921   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:20.811086   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:20.811210   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:34:20.811343   48054 main.go:141] libmachine: Using SSH client type: native
	I0725 18:34:20.811508   48054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0725 18:34:20.811519   48054 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 18:34:20.916628   48054 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 18:34:20.916750   48054 main.go:141] libmachine: found compatible host: buildroot
	I0725 18:34:20.916764   48054 main.go:141] libmachine: Provisioning with buildroot...
	I0725 18:34:20.916776   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetMachineName
	I0725 18:34:20.917033   48054 buildroot.go:166] provisioning hostname "kubernetes-upgrade-069209"
	I0725 18:34:20.917061   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetMachineName
	I0725 18:34:20.917277   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:34:20.920258   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:20.920681   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:20.920709   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:20.920866   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:34:20.921047   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:20.921259   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:20.921444   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:34:20.921653   48054 main.go:141] libmachine: Using SSH client type: native
	I0725 18:34:20.921832   48054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0725 18:34:20.921848   48054 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-069209 && echo "kubernetes-upgrade-069209" | sudo tee /etc/hostname
	I0725 18:34:21.038546   48054 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-069209
	
	I0725 18:34:21.038578   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:34:21.041413   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.041754   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.041785   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.041938   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:34:21.042133   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:21.042309   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:21.042465   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:34:21.042643   48054 main.go:141] libmachine: Using SSH client type: native
	I0725 18:34:21.042806   48054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0725 18:34:21.042822   48054 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-069209' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-069209/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-069209' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:34:21.152389   48054 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:34:21.152426   48054 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:34:21.152466   48054 buildroot.go:174] setting up certificates
	I0725 18:34:21.152477   48054 provision.go:84] configureAuth start
	I0725 18:34:21.152487   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetMachineName
	I0725 18:34:21.152776   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetIP
	I0725 18:34:21.155700   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.156033   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.156061   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.156236   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:34:21.158658   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.159024   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.159048   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.159196   48054 provision.go:143] copyHostCerts
	I0725 18:34:21.159252   48054 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:34:21.159265   48054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:34:21.159326   48054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:34:21.159441   48054 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:34:21.159455   48054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:34:21.159479   48054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:34:21.159544   48054 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:34:21.159560   48054 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:34:21.159580   48054 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:34:21.159636   48054 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-069209 san=[127.0.0.1 192.168.50.165 kubernetes-upgrade-069209 localhost minikube]
	I0725 18:34:21.260485   48054 provision.go:177] copyRemoteCerts
	I0725 18:34:21.260559   48054 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:34:21.260582   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:34:21.263157   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.263552   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.263588   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.263791   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:34:21.263976   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:21.264158   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:34:21.264347   48054 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa Username:docker}
	I0725 18:34:21.346197   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:34:21.369264   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0725 18:34:21.391748   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:34:21.414495   48054 provision.go:87] duration metric: took 262.006518ms to configureAuth
	I0725 18:34:21.414526   48054 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:34:21.414720   48054 config.go:182] Loaded profile config "kubernetes-upgrade-069209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:34:21.414798   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:34:21.417539   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.417881   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.417919   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.418058   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:34:21.418260   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:21.418456   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:21.418592   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:34:21.418823   48054 main.go:141] libmachine: Using SSH client type: native
	I0725 18:34:21.418994   48054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0725 18:34:21.419010   48054 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:34:21.671558   48054 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:34:21.671600   48054 main.go:141] libmachine: Checking connection to Docker...
	I0725 18:34:21.671611   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetURL
	I0725 18:34:21.672960   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | Using libvirt version 6000000
	I0725 18:34:21.675170   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.675616   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.675641   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.675884   48054 main.go:141] libmachine: Docker is up and running!
	I0725 18:34:21.675902   48054 main.go:141] libmachine: Reticulating splines...
	I0725 18:34:21.675909   48054 client.go:171] duration metric: took 21.041687671s to LocalClient.Create
	I0725 18:34:21.675931   48054 start.go:167] duration metric: took 21.041764199s to libmachine.API.Create "kubernetes-upgrade-069209"
	I0725 18:34:21.675953   48054 start.go:293] postStartSetup for "kubernetes-upgrade-069209" (driver="kvm2")
	I0725 18:34:21.675969   48054 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:34:21.675990   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:34:21.676208   48054 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:34:21.676229   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:34:21.678493   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.678863   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.678907   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.679037   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:34:21.679219   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:21.679378   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:34:21.679525   48054 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa Username:docker}
	I0725 18:34:21.769182   48054 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:34:21.773231   48054 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:34:21.773260   48054 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:34:21.773334   48054 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:34:21.773429   48054 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:34:21.773553   48054 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:34:21.783952   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:34:21.807353   48054 start.go:296] duration metric: took 131.383063ms for postStartSetup
	I0725 18:34:21.807407   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetConfigRaw
	I0725 18:34:21.808037   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetIP
	I0725 18:34:21.810508   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.810829   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.810856   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.811122   48054 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/config.json ...
	I0725 18:34:21.811346   48054 start.go:128] duration metric: took 21.198170517s to createHost
	I0725 18:34:21.811382   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:34:21.813634   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.814125   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.814162   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.814311   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:34:21.814487   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:21.814635   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:21.814736   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:34:21.814944   48054 main.go:141] libmachine: Using SSH client type: native
	I0725 18:34:21.815123   48054 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0725 18:34:21.815136   48054 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 18:34:21.920717   48054 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721932461.888114749
	
	I0725 18:34:21.920745   48054 fix.go:216] guest clock: 1721932461.888114749
	I0725 18:34:21.920755   48054 fix.go:229] Guest: 2024-07-25 18:34:21.888114749 +0000 UTC Remote: 2024-07-25 18:34:21.811361689 +0000 UTC m=+42.526185625 (delta=76.75306ms)
	I0725 18:34:21.920779   48054 fix.go:200] guest clock delta is within tolerance: 76.75306ms
	I0725 18:34:21.920783   48054 start.go:83] releasing machines lock for "kubernetes-upgrade-069209", held for 21.307809986s
	I0725 18:34:21.920819   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:34:21.921092   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetIP
	I0725 18:34:21.924379   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.924799   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.924823   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.925044   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:34:21.925593   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:34:21.925776   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:34:21.925856   48054 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:34:21.925890   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:34:21.926008   48054 ssh_runner.go:195] Run: cat /version.json
	I0725 18:34:21.926049   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:34:21.928791   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.928868   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.929170   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.929205   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.929269   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:21.929302   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:21.929336   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:34:21.929508   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:34:21.929527   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:21.929688   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:34:21.929692   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:34:21.929843   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:34:21.929880   48054 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa Username:docker}
	I0725 18:34:21.929956   48054 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa Username:docker}
	I0725 18:34:22.046679   48054 ssh_runner.go:195] Run: systemctl --version
	I0725 18:34:22.055518   48054 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:34:22.220923   48054 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:34:22.226994   48054 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:34:22.227115   48054 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:34:22.242448   48054 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:34:22.242472   48054 start.go:495] detecting cgroup driver to use...
	I0725 18:34:22.242533   48054 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:34:22.257896   48054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:34:22.272838   48054 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:34:22.272898   48054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:34:22.285525   48054 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:34:22.297815   48054 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:34:22.412511   48054 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:34:22.545224   48054 docker.go:233] disabling docker service ...
	I0725 18:34:22.545306   48054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:34:22.560561   48054 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:34:22.576183   48054 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:34:22.712183   48054 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:34:22.827603   48054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:34:22.842123   48054 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:34:22.859875   48054 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0725 18:34:22.859940   48054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:34:22.870280   48054 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:34:22.870361   48054 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:34:22.880188   48054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:34:22.889929   48054 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:34:22.899579   48054 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:34:22.909027   48054 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:34:22.917414   48054 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:34:22.917459   48054 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:34:22.929813   48054 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:34:22.938407   48054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:34:23.070732   48054 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:34:23.210435   48054 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:34:23.210495   48054 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:34:23.215228   48054 start.go:563] Will wait 60s for crictl version
	I0725 18:34:23.215291   48054 ssh_runner.go:195] Run: which crictl
	I0725 18:34:23.219166   48054 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:34:23.261900   48054 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:34:23.261983   48054 ssh_runner.go:195] Run: crio --version
	I0725 18:34:23.293864   48054 ssh_runner.go:195] Run: crio --version
	I0725 18:34:23.465532   48054 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0725 18:34:23.487375   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetIP
	I0725 18:34:23.490854   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:23.491210   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:34:23.491243   48054 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:34:23.491516   48054 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0725 18:34:23.495683   48054 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:34:23.508259   48054 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-069209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-069209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.165 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:34:23.508420   48054 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:34:23.508492   48054 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:34:23.546274   48054 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:34:23.546378   48054 ssh_runner.go:195] Run: which lz4
	I0725 18:34:23.550370   48054 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 18:34:23.555133   48054 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:34:23.555177   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0725 18:34:25.060012   48054 crio.go:462] duration metric: took 1.509669911s to copy over tarball
	I0725 18:34:25.060104   48054 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:34:27.694111   48054 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.633965132s)
	I0725 18:34:27.694145   48054 crio.go:469] duration metric: took 2.634093964s to extract the tarball
	I0725 18:34:27.694155   48054 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:34:27.738013   48054 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:34:27.778689   48054 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:34:27.778717   48054 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:34:27.778807   48054 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0725 18:34:27.778836   48054 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:34:27.778779   48054 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:34:27.778780   48054 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:34:27.778949   48054 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:34:27.778893   48054 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:34:27.778893   48054 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:34:27.778852   48054 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0725 18:34:27.780300   48054 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:34:27.780312   48054 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:34:27.780317   48054 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:34:27.780310   48054 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:34:27.780317   48054 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0725 18:34:27.780395   48054 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:34:27.780459   48054 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:34:27.780508   48054 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0725 18:34:27.998340   48054 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0725 18:34:28.011192   48054 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:34:28.014100   48054 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0725 18:34:28.019380   48054 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:34:28.021535   48054 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:34:28.022590   48054 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:34:28.067262   48054 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0725 18:34:28.095715   48054 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0725 18:34:28.095775   48054 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0725 18:34:28.095831   48054 ssh_runner.go:195] Run: which crictl
	I0725 18:34:28.137574   48054 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0725 18:34:28.137629   48054 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:34:28.137677   48054 ssh_runner.go:195] Run: which crictl
	I0725 18:34:28.168612   48054 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0725 18:34:28.168657   48054 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:34:28.168654   48054 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0725 18:34:28.168659   48054 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0725 18:34:28.168705   48054 ssh_runner.go:195] Run: which crictl
	I0725 18:34:28.168726   48054 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:34:28.168622   48054 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0725 18:34:28.168782   48054 ssh_runner.go:195] Run: which crictl
	I0725 18:34:28.168779   48054 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:34:28.168839   48054 ssh_runner.go:195] Run: which crictl
	I0725 18:34:28.168692   48054 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0725 18:34:28.168897   48054 ssh_runner.go:195] Run: which crictl
	I0725 18:34:28.191535   48054 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0725 18:34:28.191572   48054 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:34:28.191598   48054 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0725 18:34:28.191607   48054 ssh_runner.go:195] Run: which crictl
	I0725 18:34:28.191598   48054 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:34:28.191662   48054 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0725 18:34:28.191688   48054 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:34:28.191697   48054 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:34:28.191667   48054 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:34:28.315865   48054 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0725 18:34:28.315950   48054 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0725 18:34:28.316013   48054 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0725 18:34:28.316135   48054 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0725 18:34:28.317373   48054 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0725 18:34:28.317405   48054 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0725 18:34:28.317434   48054 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0725 18:34:28.352547   48054 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0725 18:34:28.638891   48054 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:34:28.779290   48054 cache_images.go:92] duration metric: took 1.000556399s to LoadCachedImages
	W0725 18:34:28.779376   48054 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0725 18:34:28.779391   48054 kubeadm.go:934] updating node { 192.168.50.165 8443 v1.20.0 crio true true} ...
	I0725 18:34:28.779516   48054 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-069209 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-069209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:34:28.779602   48054 ssh_runner.go:195] Run: crio config
	I0725 18:34:28.832466   48054 cni.go:84] Creating CNI manager for ""
	I0725 18:34:28.832502   48054 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:34:28.832518   48054 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:34:28.832544   48054 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.165 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-069209 NodeName:kubernetes-upgrade-069209 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0725 18:34:28.832745   48054 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-069209"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:34:28.832828   48054 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0725 18:34:28.842599   48054 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:34:28.842671   48054 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:34:28.852217   48054 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0725 18:34:28.870121   48054 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:34:28.888571   48054 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0725 18:34:28.905199   48054 ssh_runner.go:195] Run: grep 192.168.50.165	control-plane.minikube.internal$ /etc/hosts
	I0725 18:34:28.908854   48054 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:34:28.920482   48054 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:34:29.044799   48054 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:34:29.066494   48054 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209 for IP: 192.168.50.165
	I0725 18:34:29.066521   48054 certs.go:194] generating shared ca certs ...
	I0725 18:34:29.066542   48054 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:34:29.066726   48054 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:34:29.066790   48054 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:34:29.066802   48054 certs.go:256] generating profile certs ...
	I0725 18:34:29.066874   48054 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/client.key
	I0725 18:34:29.066890   48054 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/client.crt with IP's: []
	I0725 18:34:29.204188   48054 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/client.crt ...
	I0725 18:34:29.204217   48054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/client.crt: {Name:mkc6bd34bd51e689ba422f65b890ae74cfce7c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:34:29.204434   48054 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/client.key ...
	I0725 18:34:29.204460   48054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/client.key: {Name:mkff457afb2d59d10e41cc4dde385e444341f6ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:34:29.204594   48054 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.key.ade831bd
	I0725 18:34:29.204620   48054 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.crt.ade831bd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.165]
	I0725 18:34:29.546879   48054 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.crt.ade831bd ...
	I0725 18:34:29.546919   48054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.crt.ade831bd: {Name:mk1e42b221eb395017ad4be0976a1aa9e2d8c017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:34:29.547110   48054 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.key.ade831bd ...
	I0725 18:34:29.547135   48054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.key.ade831bd: {Name:mk4027b3d382f0698d8e905747a66328c442c1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:34:29.547255   48054 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.crt.ade831bd -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.crt
	I0725 18:34:29.547375   48054 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.key.ade831bd -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.key
	I0725 18:34:29.547459   48054 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/proxy-client.key
	I0725 18:34:29.547481   48054 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/proxy-client.crt with IP's: []
	I0725 18:34:29.728911   48054 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/proxy-client.crt ...
	I0725 18:34:29.728939   48054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/proxy-client.crt: {Name:mkc6c2671beb71459db136d97842332936b04081 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:34:29.729119   48054 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/proxy-client.key ...
	I0725 18:34:29.729136   48054 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/proxy-client.key: {Name:mk569c3af25ab2b16ff762f37712f1b0d47364db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:34:29.729328   48054 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:34:29.729377   48054 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:34:29.729391   48054 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:34:29.729429   48054 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:34:29.729463   48054 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:34:29.729494   48054 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:34:29.729548   48054 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:34:29.730093   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:34:29.755966   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:34:29.778686   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:34:29.802001   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:34:29.826177   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0725 18:34:29.954831   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:34:29.983507   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:34:30.005347   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:34:30.034172   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:34:30.062605   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:34:30.091825   48054 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:34:30.117105   48054 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:34:30.135588   48054 ssh_runner.go:195] Run: openssl version
	I0725 18:34:30.141088   48054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:34:30.151543   48054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:34:30.155974   48054 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:34:30.156020   48054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:34:30.161562   48054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:34:30.173263   48054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:34:30.184692   48054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:34:30.188844   48054 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:34:30.188899   48054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:34:30.195697   48054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:34:30.207179   48054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:34:30.217979   48054 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:34:30.222083   48054 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:34:30.222144   48054 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:34:30.227424   48054 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:34:30.238120   48054 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:34:30.241778   48054 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 18:34:30.241835   48054 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-069209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-069209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.165 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:34:30.241906   48054 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:34:30.241979   48054 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:34:30.278688   48054 cri.go:89] found id: ""
	I0725 18:34:30.278775   48054 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:34:30.289881   48054 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:34:30.299496   48054 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:34:30.309337   48054 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:34:30.309361   48054 kubeadm.go:157] found existing configuration files:
	
	I0725 18:34:30.309417   48054 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:34:30.318581   48054 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:34:30.318649   48054 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:34:30.328073   48054 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:34:30.337263   48054 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:34:30.337335   48054 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:34:30.347076   48054 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:34:30.356537   48054 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:34:30.356610   48054 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:34:30.367504   48054 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:34:30.377043   48054 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:34:30.377129   48054 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:34:30.386903   48054 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:34:30.513953   48054 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:34:30.514069   48054 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:34:30.673383   48054 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:34:30.673539   48054 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:34:30.673658   48054 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:34:30.869677   48054 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:34:31.039351   48054 out.go:204]   - Generating certificates and keys ...
	I0725 18:34:31.039485   48054 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:34:31.039615   48054 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:34:31.039736   48054 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 18:34:31.098093   48054 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 18:34:31.170019   48054 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 18:34:31.300875   48054 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 18:34:31.450229   48054 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 18:34:31.450382   48054 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-069209 localhost] and IPs [192.168.50.165 127.0.0.1 ::1]
	I0725 18:34:31.659249   48054 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 18:34:31.659497   48054 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-069209 localhost] and IPs [192.168.50.165 127.0.0.1 ::1]
	I0725 18:34:31.800592   48054 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 18:34:32.082884   48054 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 18:34:32.430496   48054 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 18:34:32.430884   48054 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:34:32.588916   48054 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:34:32.802947   48054 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:34:32.866611   48054 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:34:33.009616   48054 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:34:33.027804   48054 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:34:33.028900   48054 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:34:33.029012   48054 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:34:33.185194   48054 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:34:33.187182   48054 out.go:204]   - Booting up control plane ...
	I0725 18:34:33.187307   48054 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:34:33.196766   48054 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:34:33.196897   48054 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:34:33.197930   48054 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:34:33.206011   48054 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:35:13.196538   48054 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:35:13.196650   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:35:13.196873   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:35:18.197181   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:35:18.197432   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:35:28.196730   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:35:28.196940   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:35:48.195967   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:35:48.196266   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:36:28.197385   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:36:28.197586   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:36:28.197606   48054 kubeadm.go:310] 
	I0725 18:36:28.197697   48054 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:36:28.197780   48054 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:36:28.197792   48054 kubeadm.go:310] 
	I0725 18:36:28.197844   48054 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:36:28.197896   48054 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:36:28.198015   48054 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:36:28.198026   48054 kubeadm.go:310] 
	I0725 18:36:28.198147   48054 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:36:28.198223   48054 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:36:28.198275   48054 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:36:28.198286   48054 kubeadm.go:310] 
	I0725 18:36:28.198450   48054 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:36:28.198563   48054 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:36:28.198578   48054 kubeadm.go:310] 
	I0725 18:36:28.198726   48054 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:36:28.198847   48054 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:36:28.198973   48054 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:36:28.199075   48054 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:36:28.199101   48054 kubeadm.go:310] 
	I0725 18:36:28.199262   48054 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:36:28.199397   48054 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:36:28.199512   48054 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0725 18:36:28.199631   48054 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-069209 localhost] and IPs [192.168.50.165 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-069209 localhost] and IPs [192.168.50.165 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-069209 localhost] and IPs [192.168.50.165 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-069209 localhost] and IPs [192.168.50.165 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 18:36:28.199692   48054 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:36:30.270909   48054 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.071183675s)
	I0725 18:36:30.270999   48054 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:36:30.285978   48054 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:36:30.296589   48054 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:36:30.296609   48054 kubeadm.go:157] found existing configuration files:
	
	I0725 18:36:30.296664   48054 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:36:30.305651   48054 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:36:30.305717   48054 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:36:30.315275   48054 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:36:30.324317   48054 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:36:30.324401   48054 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:36:30.334623   48054 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:36:30.344348   48054 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:36:30.344405   48054 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:36:30.354236   48054 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:36:30.362769   48054 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:36:30.362824   48054 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:36:30.371918   48054 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:36:30.447124   48054 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:36:30.447178   48054 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:36:30.600724   48054 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:36:30.600995   48054 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:36:30.601151   48054 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:36:30.806983   48054 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:36:30.809830   48054 out.go:204]   - Generating certificates and keys ...
	I0725 18:36:30.809958   48054 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:36:30.810045   48054 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:36:30.810164   48054 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:36:30.810284   48054 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:36:30.810389   48054 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:36:30.810470   48054 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:36:30.810596   48054 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:36:30.810683   48054 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:36:30.810800   48054 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:36:30.810920   48054 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:36:30.810985   48054 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:36:30.811079   48054 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:36:30.978272   48054 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:36:31.267552   48054 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:36:31.357360   48054 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:36:31.457096   48054 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:36:31.475080   48054 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:36:31.476430   48054 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:36:31.476515   48054 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:36:31.622402   48054 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:36:31.624545   48054 out.go:204]   - Booting up control plane ...
	I0725 18:36:31.624684   48054 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:36:31.640010   48054 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:36:31.643024   48054 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:36:31.644486   48054 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:36:31.647838   48054 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:37:11.650314   48054 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:37:11.650611   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:37:11.650982   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:37:16.651682   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:37:16.651991   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:37:26.652534   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:37:26.652830   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:37:46.651787   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:37:46.652100   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:38:26.651090   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:38:26.651345   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:38:26.651363   48054 kubeadm.go:310] 
	I0725 18:38:26.651432   48054 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:38:26.651492   48054 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:38:26.651508   48054 kubeadm.go:310] 
	I0725 18:38:26.651541   48054 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:38:26.651605   48054 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:38:26.651691   48054 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:38:26.651702   48054 kubeadm.go:310] 
	I0725 18:38:26.651790   48054 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:38:26.651832   48054 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:38:26.651868   48054 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:38:26.651877   48054 kubeadm.go:310] 
	I0725 18:38:26.652008   48054 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:38:26.652086   48054 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:38:26.652098   48054 kubeadm.go:310] 
	I0725 18:38:26.652191   48054 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:38:26.652273   48054 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:38:26.652352   48054 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:38:26.652435   48054 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:38:26.652461   48054 kubeadm.go:310] 
	I0725 18:38:26.653238   48054 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:38:26.653361   48054 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:38:26.653509   48054 kubeadm.go:394] duration metric: took 3m56.41167832s to StartCluster
	I0725 18:38:26.653524   48054 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0725 18:38:26.653577   48054 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:38:26.653627   48054 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:38:26.704013   48054 cri.go:89] found id: ""
	I0725 18:38:26.704037   48054 logs.go:276] 0 containers: []
	W0725 18:38:26.704047   48054 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:38:26.704054   48054 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:38:26.704115   48054 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:38:26.748944   48054 cri.go:89] found id: ""
	I0725 18:38:26.748967   48054 logs.go:276] 0 containers: []
	W0725 18:38:26.748974   48054 logs.go:278] No container was found matching "etcd"
	I0725 18:38:26.748979   48054 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:38:26.749034   48054 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:38:26.782875   48054 cri.go:89] found id: ""
	I0725 18:38:26.782900   48054 logs.go:276] 0 containers: []
	W0725 18:38:26.782908   48054 logs.go:278] No container was found matching "coredns"
	I0725 18:38:26.782913   48054 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:38:26.782974   48054 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:38:26.814675   48054 cri.go:89] found id: ""
	I0725 18:38:26.814703   48054 logs.go:276] 0 containers: []
	W0725 18:38:26.814713   48054 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:38:26.814721   48054 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:38:26.814778   48054 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:38:26.846014   48054 cri.go:89] found id: ""
	I0725 18:38:26.846043   48054 logs.go:276] 0 containers: []
	W0725 18:38:26.846051   48054 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:38:26.846056   48054 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:38:26.846112   48054 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:38:26.878015   48054 cri.go:89] found id: ""
	I0725 18:38:26.878044   48054 logs.go:276] 0 containers: []
	W0725 18:38:26.878055   48054 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:38:26.878062   48054 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:38:26.878118   48054 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:38:26.910833   48054 cri.go:89] found id: ""
	I0725 18:38:26.910861   48054 logs.go:276] 0 containers: []
	W0725 18:38:26.910869   48054 logs.go:278] No container was found matching "kindnet"
	I0725 18:38:26.910878   48054 logs.go:123] Gathering logs for kubelet ...
	I0725 18:38:26.910890   48054 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:38:26.968411   48054 logs.go:123] Gathering logs for dmesg ...
	I0725 18:38:26.968440   48054 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:38:26.981237   48054 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:38:26.981266   48054 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:38:27.096530   48054 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:38:27.096557   48054 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:38:27.096572   48054 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:38:27.193170   48054 logs.go:123] Gathering logs for container status ...
	I0725 18:38:27.193209   48054 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0725 18:38:27.231622   48054 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 18:38:27.231671   48054 out.go:239] * 
	* 
	W0725 18:38:27.231733   48054 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:38:27.231764   48054 out.go:239] * 
	W0725 18:38:27.232701   48054 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:38:27.235727   48054 out.go:177] 
	W0725 18:38:27.236892   48054 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:38:27.236946   48054 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 18:38:27.236964   48054 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 18:38:27.238499   48054 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-069209 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
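The failure above is the kubelet never answering its health check, and the minikube output suggests pinning the kubelet cgroup driver. A minimal shell sketch of that retry plus the kubelet/crictl checks recommended in the log, using the same binary and profile as this run (the --extra-config flag comes from the suggestion in the log and was not exercised in this report):

    # Retry the v1.20.0 start with the kubelet cgroup driver set to systemd, per the suggestion above
    out/minikube-linux-amd64 start -p kubernetes-upgrade-069209 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd
    # Inspect kubelet state on the node, as the kubeadm output recommends
    out/minikube-linux-amd64 -p kubernetes-upgrade-069209 ssh "sudo systemctl status kubelet --no-pager; sudo journalctl -xeu kubelet --no-pager | tail -n 50"
    # Look for a crashed control-plane container via crictl
    out/minikube-linux-amd64 -p kubernetes-upgrade-069209 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"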
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-069209
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-069209: (6.302680603s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-069209 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-069209 status --format={{.Host}}: exit status 7 (64.598427ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069209 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-069209 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.176370784s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-069209 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069209 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-069209 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (76.023223ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-069209] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-069209
	    minikube start -p kubernetes-upgrade-069209 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0692092 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-069209 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
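For reference, option 1 from the suggestion above, spelled out against the binary used in this run (a sketch; the driver and runtime flags are carried over from the test invocation, not from the suggestion itself):

    out/minikube-linux-amd64 delete -p kubernetes-upgrade-069209
    out/minikube-linux-amd64 start -p kubernetes-upgrade-069209 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio

The test instead takes option 3 and restarts the existing cluster at v1.31.0-beta.0, which is what the next step below does.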
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-069209 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-069209 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.030197223s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-25 18:41:02.999697224 +0000 UTC m=+4337.080429081
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-069209 -n kubernetes-upgrade-069209
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-069209 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-069209 logs -n 25: (1.779310835s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-889508 sudo                  | cilium-889508                | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC |                     |
	|         | systemctl status containerd            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                |                              |         |         |                     |                     |
	| ssh     | -p cilium-889508 sudo                  | cilium-889508                | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC |                     |
	|         | systemctl cat containerd               |                              |         |         |                     |                     |
	|         | --no-pager                             |                              |         |         |                     |                     |
	| ssh     | -p cilium-889508 sudo cat              | cilium-889508                | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                              |         |         |                     |                     |
	| ssh     | -p cilium-889508 sudo cat              | cilium-889508                | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC |                     |
	|         | /etc/containerd/config.toml            |                              |         |         |                     |                     |
	| ssh     | -p cilium-889508 sudo                  | cilium-889508                | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC |                     |
	|         | containerd config dump                 |                              |         |         |                     |                     |
	| ssh     | -p cilium-889508 sudo                  | cilium-889508                | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC |                     |
	|         | systemctl status crio --all            |                              |         |         |                     |                     |
	|         | --full --no-pager                      |                              |         |         |                     |                     |
	| ssh     | -p cilium-889508 sudo                  | cilium-889508                | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC |                     |
	|         | systemctl cat crio --no-pager          |                              |         |         |                     |                     |
	| ssh     | -p cilium-889508 sudo find             | cilium-889508                | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                              |         |         |                     |                     |
	| ssh     | -p cilium-889508 sudo crio             | cilium-889508                | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC |                     |
	|         | config                                 |                              |         |         |                     |                     |
	| delete  | -p cilium-889508                       | cilium-889508                | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC | 25 Jul 24 18:38 UTC |
	| start   | -p cert-options-091318                 | cert-options-091318          | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC | 25 Jul 24 18:39 UTC |
	|         | --memory=2048                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-160946              | stopped-upgrade-160946       | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC | 25 Jul 24 18:38 UTC |
	| start   | -p force-systemd-flag-267077           | force-systemd-flag-267077    | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC | 25 Jul 24 18:39 UTC |
	|         | --memory=2048 --force-systemd          |                              |         |         |                     |                     |
	|         | --alsologtostderr                      |                              |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio               |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-069209           | kubernetes-upgrade-069209    | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC | 25 Jul 24 18:38 UTC |
	| start   | -p kubernetes-upgrade-069209           | kubernetes-upgrade-069209    | jenkins | v1.33.1 | 25 Jul 24 18:38 UTC | 25 Jul 24 18:39 UTC |
	|         | --memory=2200                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0    |                              |         |         |                     |                     |
	|         | --alsologtostderr                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio               |                              |         |         |                     |                     |
	| ssh     | cert-options-091318 ssh                | cert-options-091318          | jenkins | v1.33.1 | 25 Jul 24 18:39 UTC | 25 Jul 24 18:39 UTC |
	|         | openssl x509 -text -noout -in          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-091318 -- sudo         | cert-options-091318          | jenkins | v1.33.1 | 25 Jul 24 18:39 UTC | 25 Jul 24 18:39 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                              |         |         |                     |                     |
	| delete  | -p cert-options-091318                 | cert-options-091318          | jenkins | v1.33.1 | 25 Jul 24 18:39 UTC | 25 Jul 24 18:39 UTC |
	| start   | -p old-k8s-version-108542              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:39 UTC |                     |
	|         | --memory=2200                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                              |         |         |                     |                     |
	|         | --kvm-network=default                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                |                              |         |         |                     |                     |
	|         | --keep-context=false                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                              |         |         |                     |                     |
	| ssh     | force-systemd-flag-267077 ssh cat      | force-systemd-flag-267077    | jenkins | v1.33.1 | 25 Jul 24 18:39 UTC | 25 Jul 24 18:39 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                              |         |         |                     |                     |
	| delete  | -p force-systemd-flag-267077           | force-systemd-flag-267077    | jenkins | v1.33.1 | 25 Jul 24 18:39 UTC | 25 Jul 24 18:39 UTC |
	| delete  | -p                                     | disable-driver-mounts-045154 | jenkins | v1.33.1 | 25 Jul 24 18:39 UTC | 25 Jul 24 18:39 UTC |
	|         | disable-driver-mounts-045154           |                              |         |         |                     |                     |
	| start   | -p no-preload-371663 --memory=2200     | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:39 UTC |                     |
	|         | --alsologtostderr --wait=true          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0    |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-069209           | kubernetes-upgrade-069209    | jenkins | v1.33.1 | 25 Jul 24 18:39 UTC |                     |
	|         | --memory=2200                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                              |         |         |                     |                     |
	|         | --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-069209           | kubernetes-upgrade-069209    | jenkins | v1.33.1 | 25 Jul 24 18:39 UTC | 25 Jul 24 18:41 UTC |
	|         | --memory=2200                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0    |                              |         |         |                     |                     |
	|         | --alsologtostderr                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio               |                              |         |         |                     |                     |
	|---------|----------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:39:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:39:56.006658   56114 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:39:56.006791   56114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:39:56.006801   56114 out.go:304] Setting ErrFile to fd 2...
	I0725 18:39:56.006808   56114 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:39:56.007018   56114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:39:56.007536   56114 out.go:298] Setting JSON to false
	I0725 18:39:56.008529   56114 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4940,"bootTime":1721927856,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:39:56.008586   56114 start.go:139] virtualization: kvm guest
	I0725 18:39:56.010568   56114 out.go:177] * [kubernetes-upgrade-069209] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:39:56.011933   56114 notify.go:220] Checking for updates...
	I0725 18:39:56.011948   56114 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:39:56.013496   56114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:39:56.014766   56114 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:39:56.015957   56114 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:39:56.017152   56114 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:39:56.018375   56114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:39:56.019838   56114 config.go:182] Loaded profile config "kubernetes-upgrade-069209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:39:56.020231   56114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:39:56.020264   56114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:39:56.036524   56114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40801
	I0725 18:39:56.036926   56114 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:39:56.037469   56114 main.go:141] libmachine: Using API Version  1
	I0725 18:39:56.037487   56114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:39:56.037815   56114 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:39:56.037994   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:39:56.038237   56114 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:39:56.038610   56114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:39:56.038641   56114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:39:56.053094   56114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37569
	I0725 18:39:56.053455   56114 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:39:56.053850   56114 main.go:141] libmachine: Using API Version  1
	I0725 18:39:56.053906   56114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:39:56.054253   56114 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:39:56.054467   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:39:56.089563   56114 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:39:56.090721   56114 start.go:297] selected driver: kvm2
	I0725 18:39:56.090733   56114 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-069209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-069209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.165 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:39:56.090857   56114 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:39:56.091532   56114 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:39:56.091614   56114 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:39:56.106616   56114 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:39:56.106972   56114 cni.go:84] Creating CNI manager for ""
	I0725 18:39:56.106987   56114 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:39:56.107023   56114 start.go:340] cluster config:
	{Name:kubernetes-upgrade-069209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-069209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.165 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:39:56.107118   56114 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:39:56.108939   56114 out.go:177] * Starting "kubernetes-upgrade-069209" primary control-plane node in "kubernetes-upgrade-069209" cluster
	I0725 18:39:53.827361   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:53.827864   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:53.827887   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:53.827809   55829 retry.go:31] will retry after 3.500004092s: waiting for machine to come up
	I0725 18:39:57.329555   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.330152   55363 main.go:141] libmachine: (old-k8s-version-108542) Found IP for machine: 192.168.39.29
	I0725 18:39:57.330175   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has current primary IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.330183   55363 main.go:141] libmachine: (old-k8s-version-108542) Reserving static IP address...
	I0725 18:39:57.330607   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"} in network mk-old-k8s-version-108542
	I0725 18:39:57.408265   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Getting to WaitForSSH function...
	I0725 18:39:57.408311   55363 main.go:141] libmachine: (old-k8s-version-108542) Reserved static IP address: 192.168.39.29
	I0725 18:39:57.408373   55363 main.go:141] libmachine: (old-k8s-version-108542) Waiting for SSH to be available...
	I0725 18:39:57.411276   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.411733   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:minikube Clientid:01:52:54:00:19:68:38}
	I0725 18:39:57.411757   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.411955   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH client type: external
	I0725 18:39:57.411983   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa (-rw-------)
	I0725 18:39:57.412022   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:39:57.412033   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | About to run SSH command:
	I0725 18:39:57.412075   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | exit 0
	I0725 18:39:57.544664   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | SSH cmd err, output: <nil>: 
	I0725 18:39:57.544944   55363 main.go:141] libmachine: (old-k8s-version-108542) KVM machine creation complete!
	I0725 18:39:57.545303   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:39:57.545986   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:57.546201   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:57.546394   55363 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 18:39:57.546412   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetState
	I0725 18:39:57.547916   55363 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 18:39:57.547935   55363 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 18:39:57.547944   55363 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 18:39:57.547954   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:57.550582   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.551037   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:57.551079   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.551230   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:57.551402   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.551534   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.551698   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:57.551894   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:57.552076   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:57.552088   55363 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 18:39:57.667498   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:39:57.667531   55363 main.go:141] libmachine: Detecting the provisioner...
	I0725 18:39:57.667541   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:57.670600   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.671047   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:57.671080   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.671197   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:57.671394   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.671579   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.671763   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:57.671923   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:57.672243   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:57.672259   55363 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 18:39:57.784812   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 18:39:57.784871   55363 main.go:141] libmachine: found compatible host: buildroot
	I0725 18:39:57.784879   55363 main.go:141] libmachine: Provisioning with buildroot...
	I0725 18:39:57.784890   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:39:57.785189   55363 buildroot.go:166] provisioning hostname "old-k8s-version-108542"
	I0725 18:39:57.785222   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:39:57.785436   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:57.788306   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.788747   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:57.788788   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.788974   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:57.789142   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.789331   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.789475   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:57.789673   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:57.789898   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:57.789916   55363 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-108542 && echo "old-k8s-version-108542" | sudo tee /etc/hostname
	I0725 18:39:57.917676   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-108542
	
	I0725 18:39:57.917713   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:57.920786   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.921301   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:57.921334   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.921518   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:57.921725   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.921942   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.922087   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:57.922291   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:57.922498   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:57.922522   55363 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-108542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-108542/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-108542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:39:58.053079   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: 
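
The shell fragment echoed just above is an idempotent /etc/hosts edit: it does nothing if the hostname is already mapped, rewrites an existing 127.0.1.1 line when there is one, and appends a new entry otherwise. A minimal standalone Go sketch of assembling such a command string follows; the helper name and the hard-coded profile name are illustrative and not taken from minikube's source.

    package main

    import "fmt"

    // hostsUpdateCommand assembles the idempotent /etc/hosts edit echoed in the
    // log: skip if the hostname is already present, rewrite an existing
    // 127.0.1.1 line if there is one, otherwise append a new entry.
    // (Illustrative helper, not minikube's actual implementation.)
    func hostsUpdateCommand(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() {
    	// Print the command that would be sent over SSH for this profile.
    	fmt.Println(hostsUpdateCommand("old-k8s-version-108542"))
    }
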
	I0725 18:39:58.053111   55363 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:39:58.053172   55363 buildroot.go:174] setting up certificates
	I0725 18:39:58.053182   55363 provision.go:84] configureAuth start
	I0725 18:39:58.053204   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:39:58.053513   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:39:58.056481   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.056860   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.056891   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.057079   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:58.059354   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.059764   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.059789   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.059991   55363 provision.go:143] copyHostCerts
	I0725 18:39:58.060076   55363 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:39:58.060096   55363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:39:58.060169   55363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:39:58.060345   55363 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:39:58.060360   55363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:39:58.060402   55363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:39:58.060506   55363 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:39:58.060517   55363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:39:58.060546   55363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:39:58.060618   55363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-108542 san=[127.0.0.1 192.168.39.29 localhost minikube old-k8s-version-108542]
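
The step above issues a server certificate signed by the local CA with the SANs listed in the log (loopback, the VM IP, and the host/profile names). The sketch below shows roughly how such a certificate can be produced with Go's crypto/x509; it creates a throwaway CA in-process instead of loading the real ca.pem/ca-key.pem, so the key sizes, lifetimes, and output handling are illustrative only.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for ca.pem / ca-key.pem from the log.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(3, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Server certificate with the SANs shown in the provisioning log.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-108542"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.29")},
    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-108542"},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// server.pem; the matching server-key.pem would be PEM-encoded the same way.
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
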
	I0725 18:39:59.077030   55806 start.go:364] duration metric: took 24.066086021s to acquireMachinesLock for "no-preload-371663"
	I0725 18:39:59.077104   55806 start.go:93] Provisioning new machine with config: &{Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:39:59.077217   55806 start.go:125] createHost starting for "" (driver="kvm2")
	I0725 18:39:59.079286   55806 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 18:39:59.079486   55806 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:39:59.079522   55806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:39:59.099459   55806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I0725 18:39:59.099864   55806 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:39:59.100447   55806 main.go:141] libmachine: Using API Version  1
	I0725 18:39:59.100472   55806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:39:59.100845   55806 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:39:59.101060   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:39:59.101206   55806 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:39:59.101399   55806 start.go:159] libmachine.API.Create for "no-preload-371663" (driver="kvm2")
	I0725 18:39:59.101423   55806 client.go:168] LocalClient.Create starting
	I0725 18:39:59.101460   55806 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 18:39:59.101500   55806 main.go:141] libmachine: Decoding PEM data...
	I0725 18:39:59.101518   55806 main.go:141] libmachine: Parsing certificate...
	I0725 18:39:59.101591   55806 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 18:39:59.101625   55806 main.go:141] libmachine: Decoding PEM data...
	I0725 18:39:59.101642   55806 main.go:141] libmachine: Parsing certificate...
	I0725 18:39:59.101667   55806 main.go:141] libmachine: Running pre-create checks...
	I0725 18:39:59.101681   55806 main.go:141] libmachine: (no-preload-371663) Calling .PreCreateCheck
	I0725 18:39:59.102017   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetConfigRaw
	I0725 18:39:59.102432   55806 main.go:141] libmachine: Creating machine...
	I0725 18:39:59.102447   55806 main.go:141] libmachine: (no-preload-371663) Calling .Create
	I0725 18:39:59.102566   55806 main.go:141] libmachine: (no-preload-371663) Creating KVM machine...
	I0725 18:39:59.103888   55806 main.go:141] libmachine: (no-preload-371663) DBG | found existing default KVM network
	I0725 18:39:59.105763   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:39:59.105606   56177 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:52:c5:9a} reservation:<nil>}
	I0725 18:39:59.106805   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:39:59.106721   56177 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3b:40:31} reservation:<nil>}
	I0725 18:39:59.107879   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:39:59.107806   56177 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:48:7e:8e} reservation:<nil>}
	I0725 18:39:59.109142   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:39:59.109061   56177 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a3890}
	I0725 18:39:59.109164   55806 main.go:141] libmachine: (no-preload-371663) DBG | created network xml: 
	I0725 18:39:59.109172   55806 main.go:141] libmachine: (no-preload-371663) DBG | <network>
	I0725 18:39:59.109177   55806 main.go:141] libmachine: (no-preload-371663) DBG |   <name>mk-no-preload-371663</name>
	I0725 18:39:59.109184   55806 main.go:141] libmachine: (no-preload-371663) DBG |   <dns enable='no'/>
	I0725 18:39:59.109194   55806 main.go:141] libmachine: (no-preload-371663) DBG |   
	I0725 18:39:59.109203   55806 main.go:141] libmachine: (no-preload-371663) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0725 18:39:59.109209   55806 main.go:141] libmachine: (no-preload-371663) DBG |     <dhcp>
	I0725 18:39:59.109215   55806 main.go:141] libmachine: (no-preload-371663) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0725 18:39:59.109223   55806 main.go:141] libmachine: (no-preload-371663) DBG |     </dhcp>
	I0725 18:39:59.109229   55806 main.go:141] libmachine: (no-preload-371663) DBG |   </ip>
	I0725 18:39:59.109234   55806 main.go:141] libmachine: (no-preload-371663) DBG |   
	I0725 18:39:59.109265   55806 main.go:141] libmachine: (no-preload-371663) DBG | </network>
	I0725 18:39:59.109284   55806 main.go:141] libmachine: (no-preload-371663) DBG | 
	I0725 18:39:59.114618   55806 main.go:141] libmachine: (no-preload-371663) DBG | trying to create private KVM network mk-no-preload-371663 192.168.72.0/24...
	I0725 18:39:59.185180   55806 main.go:141] libmachine: (no-preload-371663) DBG | private KVM network mk-no-preload-371663 192.168.72.0/24 created
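
The XML dumped above is the isolated libvirt network the driver defines for the profile: DNS disabled, a single /24, and a DHCP range for guest leases. minikube's kvm2 driver talks to libvirt through its API; the standalone sketch below reproduces the same definition with the virsh CLI instead, and the network name and addresses are copied from the log purely for illustration.

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    // networkXML mirrors the <network> definition dumped in the log: DNS
    // disabled, a single /24, and a DHCP range for guest leases.
    func networkXML(name, gateway, dhcpStart, dhcpEnd string) string {
    	return fmt.Sprintf(`<network>
      <name>%s</name>
      <dns enable='no'/>
      <ip address='%s' netmask='255.255.255.0'>
        <dhcp>
          <range start='%s' end='%s'/>
        </dhcp>
      </ip>
    </network>`, name, gateway, dhcpStart, dhcpEnd)
    }

    func main() {
    	const name = "mk-no-preload-371663"
    	xml := networkXML(name, "192.168.72.1", "192.168.72.2", "192.168.72.253")

    	// Write the definition to a temp file and hand it to virsh.
    	f, err := os.CreateTemp("", "mk-net-*.xml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer os.Remove(f.Name())
    	if _, err := f.WriteString(xml); err != nil {
    		log.Fatal(err)
    	}
    	f.Close()

    	for _, args := range [][]string{{"net-define", f.Name()}, {"net-start", name}} {
    		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
    			log.Fatalf("virsh %v: %v\n%s", args, err, out)
    		}
    	}
    }
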
	I0725 18:39:59.185218   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:39:59.185104   56177 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:39:59.185232   55806 main.go:141] libmachine: (no-preload-371663) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663 ...
	I0725 18:39:59.185254   55806 main.go:141] libmachine: (no-preload-371663) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 18:39:59.185275   55806 main.go:141] libmachine: (no-preload-371663) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 18:39:59.445648   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:39:59.445530   56177 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa...
	I0725 18:39:59.547744   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:39:59.547621   56177 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/no-preload-371663.rawdisk...
	I0725 18:39:59.547777   55806 main.go:141] libmachine: (no-preload-371663) DBG | Writing magic tar header
	I0725 18:39:59.547798   55806 main.go:141] libmachine: (no-preload-371663) DBG | Writing SSH key tar header
	I0725 18:39:59.547810   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:39:59.547733   56177 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663 ...
	I0725 18:39:59.547869   55806 main.go:141] libmachine: (no-preload-371663) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663
	I0725 18:39:59.547906   55806 main.go:141] libmachine: (no-preload-371663) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 18:39:59.547922   55806 main.go:141] libmachine: (no-preload-371663) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663 (perms=drwx------)
	I0725 18:39:59.547932   55806 main.go:141] libmachine: (no-preload-371663) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:39:59.547953   55806 main.go:141] libmachine: (no-preload-371663) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 18:39:59.547970   55806 main.go:141] libmachine: (no-preload-371663) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 18:39:59.547979   55806 main.go:141] libmachine: (no-preload-371663) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 18:39:59.547990   55806 main.go:141] libmachine: (no-preload-371663) DBG | Checking permissions on dir: /home/jenkins
	I0725 18:39:59.547998   55806 main.go:141] libmachine: (no-preload-371663) DBG | Checking permissions on dir: /home
	I0725 18:39:59.548011   55806 main.go:141] libmachine: (no-preload-371663) DBG | Skipping /home - not owner
	I0725 18:39:59.548058   55806 main.go:141] libmachine: (no-preload-371663) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 18:39:59.548087   55806 main.go:141] libmachine: (no-preload-371663) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 18:39:59.548101   55806 main.go:141] libmachine: (no-preload-371663) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 18:39:59.548118   55806 main.go:141] libmachine: (no-preload-371663) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 18:39:59.548133   55806 main.go:141] libmachine: (no-preload-371663) Creating domain...
	I0725 18:39:59.549275   55806 main.go:141] libmachine: (no-preload-371663) define libvirt domain using xml: 
	I0725 18:39:59.549297   55806 main.go:141] libmachine: (no-preload-371663) <domain type='kvm'>
	I0725 18:39:59.549307   55806 main.go:141] libmachine: (no-preload-371663)   <name>no-preload-371663</name>
	I0725 18:39:59.549316   55806 main.go:141] libmachine: (no-preload-371663)   <memory unit='MiB'>2200</memory>
	I0725 18:39:59.549351   55806 main.go:141] libmachine: (no-preload-371663)   <vcpu>2</vcpu>
	I0725 18:39:59.549374   55806 main.go:141] libmachine: (no-preload-371663)   <features>
	I0725 18:39:59.549388   55806 main.go:141] libmachine: (no-preload-371663)     <acpi/>
	I0725 18:39:59.549399   55806 main.go:141] libmachine: (no-preload-371663)     <apic/>
	I0725 18:39:59.549412   55806 main.go:141] libmachine: (no-preload-371663)     <pae/>
	I0725 18:39:59.549424   55806 main.go:141] libmachine: (no-preload-371663)     
	I0725 18:39:59.549438   55806 main.go:141] libmachine: (no-preload-371663)   </features>
	I0725 18:39:59.549461   55806 main.go:141] libmachine: (no-preload-371663)   <cpu mode='host-passthrough'>
	I0725 18:39:59.549473   55806 main.go:141] libmachine: (no-preload-371663)   
	I0725 18:39:59.549483   55806 main.go:141] libmachine: (no-preload-371663)   </cpu>
	I0725 18:39:59.549494   55806 main.go:141] libmachine: (no-preload-371663)   <os>
	I0725 18:39:59.549503   55806 main.go:141] libmachine: (no-preload-371663)     <type>hvm</type>
	I0725 18:39:59.549511   55806 main.go:141] libmachine: (no-preload-371663)     <boot dev='cdrom'/>
	I0725 18:39:59.549522   55806 main.go:141] libmachine: (no-preload-371663)     <boot dev='hd'/>
	I0725 18:39:59.549532   55806 main.go:141] libmachine: (no-preload-371663)     <bootmenu enable='no'/>
	I0725 18:39:59.549542   55806 main.go:141] libmachine: (no-preload-371663)   </os>
	I0725 18:39:59.549551   55806 main.go:141] libmachine: (no-preload-371663)   <devices>
	I0725 18:39:59.549563   55806 main.go:141] libmachine: (no-preload-371663)     <disk type='file' device='cdrom'>
	I0725 18:39:59.549580   55806 main.go:141] libmachine: (no-preload-371663)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/boot2docker.iso'/>
	I0725 18:39:59.549606   55806 main.go:141] libmachine: (no-preload-371663)       <target dev='hdc' bus='scsi'/>
	I0725 18:39:59.549618   55806 main.go:141] libmachine: (no-preload-371663)       <readonly/>
	I0725 18:39:59.549624   55806 main.go:141] libmachine: (no-preload-371663)     </disk>
	I0725 18:39:59.549636   55806 main.go:141] libmachine: (no-preload-371663)     <disk type='file' device='disk'>
	I0725 18:39:59.549649   55806 main.go:141] libmachine: (no-preload-371663)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 18:39:59.549666   55806 main.go:141] libmachine: (no-preload-371663)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/no-preload-371663.rawdisk'/>
	I0725 18:39:59.549683   55806 main.go:141] libmachine: (no-preload-371663)       <target dev='hda' bus='virtio'/>
	I0725 18:39:59.549692   55806 main.go:141] libmachine: (no-preload-371663)     </disk>
	I0725 18:39:59.549702   55806 main.go:141] libmachine: (no-preload-371663)     <interface type='network'>
	I0725 18:39:59.549713   55806 main.go:141] libmachine: (no-preload-371663)       <source network='mk-no-preload-371663'/>
	I0725 18:39:59.549720   55806 main.go:141] libmachine: (no-preload-371663)       <model type='virtio'/>
	I0725 18:39:59.549729   55806 main.go:141] libmachine: (no-preload-371663)     </interface>
	I0725 18:39:59.549740   55806 main.go:141] libmachine: (no-preload-371663)     <interface type='network'>
	I0725 18:39:59.549769   55806 main.go:141] libmachine: (no-preload-371663)       <source network='default'/>
	I0725 18:39:59.549792   55806 main.go:141] libmachine: (no-preload-371663)       <model type='virtio'/>
	I0725 18:39:59.549805   55806 main.go:141] libmachine: (no-preload-371663)     </interface>
	I0725 18:39:59.549812   55806 main.go:141] libmachine: (no-preload-371663)     <serial type='pty'>
	I0725 18:39:59.549824   55806 main.go:141] libmachine: (no-preload-371663)       <target port='0'/>
	I0725 18:39:59.549834   55806 main.go:141] libmachine: (no-preload-371663)     </serial>
	I0725 18:39:59.549846   55806 main.go:141] libmachine: (no-preload-371663)     <console type='pty'>
	I0725 18:39:59.549858   55806 main.go:141] libmachine: (no-preload-371663)       <target type='serial' port='0'/>
	I0725 18:39:59.549870   55806 main.go:141] libmachine: (no-preload-371663)     </console>
	I0725 18:39:59.549884   55806 main.go:141] libmachine: (no-preload-371663)     <rng model='virtio'>
	I0725 18:39:59.549897   55806 main.go:141] libmachine: (no-preload-371663)       <backend model='random'>/dev/random</backend>
	I0725 18:39:59.549904   55806 main.go:141] libmachine: (no-preload-371663)     </rng>
	I0725 18:39:59.549917   55806 main.go:141] libmachine: (no-preload-371663)     
	I0725 18:39:59.549923   55806 main.go:141] libmachine: (no-preload-371663)     
	I0725 18:39:59.549929   55806 main.go:141] libmachine: (no-preload-371663)   </devices>
	I0725 18:39:59.549935   55806 main.go:141] libmachine: (no-preload-371663) </domain>
	I0725 18:39:59.549943   55806 main.go:141] libmachine: (no-preload-371663) 
	I0725 18:39:59.553679   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:f2:9f:bf in network default
	I0725 18:39:59.554339   55806 main.go:141] libmachine: (no-preload-371663) Ensuring networks are active...
	I0725 18:39:59.554363   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:39:59.555012   55806 main.go:141] libmachine: (no-preload-371663) Ensuring network default is active
	I0725 18:39:59.555323   55806 main.go:141] libmachine: (no-preload-371663) Ensuring network mk-no-preload-371663 is active
	I0725 18:39:59.555867   55806 main.go:141] libmachine: (no-preload-371663) Getting domain xml...
	I0725 18:39:59.556651   55806 main.go:141] libmachine: (no-preload-371663) Creating domain...
	I0725 18:39:58.385792   55363 provision.go:177] copyRemoteCerts
	I0725 18:39:58.385865   55363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:39:58.385913   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:58.389196   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.389616   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.389646   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.389890   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:58.390102   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.390308   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:58.390457   55363 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:39:58.478436   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:39:58.501251   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0725 18:39:58.527373   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:39:58.549842   55363 provision.go:87] duration metric: took 496.643249ms to configureAuth
	I0725 18:39:58.549872   55363 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:39:58.550076   55363 config.go:182] Loaded profile config "old-k8s-version-108542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:39:58.550159   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:58.552643   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.553003   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.553034   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.553164   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:58.553368   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.553557   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.553700   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:58.553938   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:58.554160   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:58.554177   55363 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:39:58.825636   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:39:58.825665   55363 main.go:141] libmachine: Checking connection to Docker...
	I0725 18:39:58.825676   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetURL
	I0725 18:39:58.826956   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using libvirt version 6000000
	I0725 18:39:58.829526   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.829895   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.829917   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.830108   55363 main.go:141] libmachine: Docker is up and running!
	I0725 18:39:58.830124   55363 main.go:141] libmachine: Reticulating splines...
	I0725 18:39:58.830131   55363 client.go:171] duration metric: took 22.781331722s to LocalClient.Create
	I0725 18:39:58.830158   55363 start.go:167] duration metric: took 22.781400806s to libmachine.API.Create "old-k8s-version-108542"
	I0725 18:39:58.830171   55363 start.go:293] postStartSetup for "old-k8s-version-108542" (driver="kvm2")
	I0725 18:39:58.830205   55363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:39:58.830227   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:58.830470   55363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:39:58.830495   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:58.832941   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.833399   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.833426   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.833564   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:58.833719   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.833856   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:58.833990   55363 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:39:58.918647   55363 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:39:58.922535   55363 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:39:58.922561   55363 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:39:58.922626   55363 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:39:58.922709   55363 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:39:58.922795   55363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:39:58.931764   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:39:58.954933   55363 start.go:296] duration metric: took 124.733843ms for postStartSetup
	I0725 18:39:58.954985   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:39:58.955577   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:39:58.958702   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.959459   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.959496   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.959717   55363 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json ...
	I0725 18:39:58.959955   55363 start.go:128] duration metric: took 22.93486958s to createHost
	I0725 18:39:58.959982   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:58.962374   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.962692   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.962719   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.962843   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:58.963023   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.963240   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.963443   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:58.963592   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:58.963772   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:58.963858   55363 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:39:59.076848   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721932799.042325621
	
	I0725 18:39:59.076869   55363 fix.go:216] guest clock: 1721932799.042325621
	I0725 18:39:59.076878   55363 fix.go:229] Guest: 2024-07-25 18:39:59.042325621 +0000 UTC Remote: 2024-07-25 18:39:58.959970358 +0000 UTC m=+50.903762414 (delta=82.355263ms)
	I0725 18:39:59.076925   55363 fix.go:200] guest clock delta is within tolerance: 82.355263ms
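
fix.go compares the guest clock (read over SSH as a seconds.nanoseconds timestamp) against the host clock and only resyncs when the delta exceeds a tolerance; here the ~82ms delta passes. A small sketch of that comparison is below; the one-second tolerance and the hard-coded host timestamp are illustrative values rather than the ones minikube uses.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts a "seconds.nanoseconds" reading (as printed in
    // the log) into a time.Time. It assumes a nine-digit fractional part.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
    	const tolerance = time.Second // illustrative, not minikube's value

    	// Values copied from the log lines above.
    	guest, err := parseGuestClock("1721932799.042325621")
    	if err != nil {
    		panic(err)
    	}
    	host := time.Date(2024, time.July, 25, 18, 39, 58, 959970358, time.UTC)

    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance %v; would resync\n", delta, tolerance)
    	}
    }
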
	I0725 18:39:59.076933   55363 start.go:83] releasing machines lock for "old-k8s-version-108542", held for 23.052020473s
	I0725 18:39:59.076967   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:59.077243   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:39:59.080294   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.080660   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:59.080689   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.080923   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:59.081563   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:59.081738   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:59.081816   55363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:39:59.081870   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:59.082011   55363 ssh_runner.go:195] Run: cat /version.json
	I0725 18:39:59.082040   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:59.085042   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.085208   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.085413   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:59.085442   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.085604   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:59.085615   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:59.085651   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.085775   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:59.085838   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:59.085947   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:59.086033   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:59.086107   55363 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:39:59.086469   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:59.086690   55363 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:39:59.209375   55363 ssh_runner.go:195] Run: systemctl --version
	I0725 18:39:59.215649   55363 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:39:59.376812   55363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:39:59.382914   55363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:39:59.382994   55363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:39:59.399576   55363 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:39:59.399602   55363 start.go:495] detecting cgroup driver to use...
	I0725 18:39:59.399665   55363 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:39:59.417234   55363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:39:59.430697   55363 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:39:59.430764   55363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:39:59.446466   55363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:39:59.460997   55363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:39:59.585882   55363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:39:59.730360   55363 docker.go:233] disabling docker service ...
	I0725 18:39:59.730417   55363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:39:59.748258   55363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:39:59.761130   55363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:39:59.905267   55363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:40:00.024831   55363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:40:00.039802   55363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:40:00.057521   55363 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0725 18:40:00.057574   55363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:00.066917   55363 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:40:00.066992   55363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:00.076664   55363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:00.086490   55363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:00.095845   55363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
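
The sed invocations above (together with the crictl.yaml written just before them) make up the CRI-O adjustment for this run: point pause_image at registry.k8s.io/pause:3.2, force the cgroupfs cgroup manager, and pin conmon to the pod cgroup. A small Go helper that emits the same command strings is sketched below; the function is hypothetical and only mirrors what the log shows.

    package main

    import "fmt"

    // crioConfigCommands returns the shell commands the log shows for editing
    // the CRI-O drop-in config: pause image, cgroup manager, and conmon cgroup.
    // (Hypothetical helper; the image and driver are parameters.)
    func crioConfigCommands(pauseImage, cgroupManager string) []string {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	return []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
    		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
    		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
    	}
    }

    func main() {
    	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.2", "cgroupfs") {
    		fmt.Println(cmd)
    	}
    }
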
	I0725 18:40:00.105984   55363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:40:00.114771   55363 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:40:00.114833   55363 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:40:00.127039   55363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:40:00.136671   55363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:40:00.266606   55363 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:40:00.413007   55363 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:40:00.413084   55363 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:40:00.417628   55363 start.go:563] Will wait 60s for crictl version
	I0725 18:40:00.417694   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:00.421134   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:40:00.460209   55363 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:40:00.460295   55363 ssh_runner.go:195] Run: crio --version
	I0725 18:40:00.487130   55363 ssh_runner.go:195] Run: crio --version
	I0725 18:40:00.519004   55363 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0725 18:39:56.110152   56114 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0725 18:39:56.110186   56114 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0725 18:39:56.110193   56114 cache.go:56] Caching tarball of preloaded images
	I0725 18:39:56.110268   56114 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:39:56.110278   56114 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0725 18:39:56.110359   56114 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/config.json ...
	I0725 18:39:56.110531   56114 start.go:360] acquireMachinesLock for kubernetes-upgrade-069209: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:40:00.520234   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:40:00.523718   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:40:00.524159   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:40:00.524195   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:40:00.524432   55363 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:40:00.529740   55363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:40:00.545736   55363 kubeadm.go:883] updating cluster {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:40:00.545874   55363 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:40:00.545935   55363 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:40:00.584454   55363 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:40:00.584531   55363 ssh_runner.go:195] Run: which lz4
	I0725 18:40:00.588881   55363 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:40:00.592738   55363 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:40:00.592795   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0725 18:40:02.105945   55363 crio.go:462] duration metric: took 1.517092405s to copy over tarball
	I0725 18:40:02.106041   55363 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
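
Because no preloaded images were found on the node, the preload tarball (~473 MB) is copied to /preloaded.tar.lz4 and unpacked into /var with security xattrs preserved. The sketch below runs the equivalent extraction via os/exec; it assumes sudo, tar, and lz4 are available on the target and is only a stand-in for the ssh_runner call in the log.

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Equivalent of the extraction step above: unpack the preload tarball
    	// into /var, keeping security xattrs and decompressing with lz4.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4",
    		"-C", "/var",
    		"-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extracting preload: %v\n%s", err, out)
    	}
    	log.Println("preload extracted")
    }
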
	I0725 18:40:00.876133   55806 main.go:141] libmachine: (no-preload-371663) Waiting to get IP...
	I0725 18:40:00.876971   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:00.877410   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:00.877437   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:00.877405   56177 retry.go:31] will retry after 234.110377ms: waiting for machine to come up
	I0725 18:40:01.113015   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:01.113507   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:01.113525   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:01.113455   56177 retry.go:31] will retry after 278.079519ms: waiting for machine to come up
	I0725 18:40:01.392830   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:01.393391   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:01.393425   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:01.393353   56177 retry.go:31] will retry after 334.940772ms: waiting for machine to come up
	I0725 18:40:01.730165   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:01.730865   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:01.730895   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:01.730836   56177 retry.go:31] will retry after 490.732172ms: waiting for machine to come up
	I0725 18:40:02.223266   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:02.223849   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:02.223879   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:02.223793   56177 retry.go:31] will retry after 541.899014ms: waiting for machine to come up
	I0725 18:40:02.767697   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:02.768286   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:02.768348   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:02.768234   56177 retry.go:31] will retry after 633.623639ms: waiting for machine to come up
	I0725 18:40:03.403173   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:03.403752   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:03.403808   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:03.403713   56177 retry.go:31] will retry after 861.590353ms: waiting for machine to come up
	I0725 18:40:04.266936   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:04.267443   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:04.267471   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:04.267402   56177 retry.go:31] will retry after 1.456306993s: waiting for machine to come up
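Interleaved with the preload work, the no-preload-371663 lines above poll libvirt for the new guest's DHCP lease, waiting a little longer after each miss (234ms, 278ms, 334ms, and so on up to several seconds). A minimal sketch of that kind of growing, jittered retry loop, using a hypothetical lookupIP probe instead of the real KVM driver call:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP stands in for querying libvirt for the domain's lease.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errNoLease
    	}
    	return "192.168.72.62", nil
    }

    func main() {
    	delay := 200 * time.Millisecond
    	for attempt := 0; ; attempt++ {
    		ip, err := lookupIP(attempt)
    		if err == nil {
    			fmt.Println("machine came up with IP", ip)
    			return
    		}
    		// Grow the wait a little each time and add jitter, roughly
    		// matching the increasing "will retry after ..." intervals.
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    		delay = delay * 3 / 2
    	}
    }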
	I0725 18:40:04.747559   55363 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.641480879s)
	I0725 18:40:04.747599   55363 crio.go:469] duration metric: took 2.641617846s to extract the tarball
	I0725 18:40:04.747610   55363 ssh_runner.go:146] rm: /preloaded.tar.lz4
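The preload path above runs check, copy, extract, clean up: minikube finds no preloaded kube-apiserver image in CRI-O, locates lz4, scps the cached preloaded-images tarball to /preloaded.tar.lz4, unpacks it into /var with xattrs preserved, and removes the tarball. A minimal local sketch of the extract-and-remove steps, with hypothetical paths and plain os/exec instead of minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // extractPreload mimics the extract -> remove sequence seen in the log,
    // but locally and with hypothetical paths. The real flow runs these
    // commands on the guest VM over SSH.
    func extractPreload(tarball, destDir string) error {
    	// Extract with lz4 decompression, preserving xattrs such as
    	// security.capability (as the logged tar invocation does).
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", destDir, "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		return fmt.Errorf("extract %s: %w", tarball, err)
    	}
    	// The tarball is only a transfer vehicle; remove it afterwards.
    	return os.Remove(tarball)
    }

    func main() {
    	// Hypothetical paths for illustration only.
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }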
	I0725 18:40:04.789962   55363 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:40:04.835104   55363 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:40:04.835134   55363 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:40:04.835204   55363 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:40:04.835261   55363 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:40:04.835269   55363 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:40:04.835283   55363 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0725 18:40:04.835244   55363 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:40:04.835325   55363 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0725 18:40:04.835244   55363 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:40:04.835561   55363 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:40:04.836832   55363 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0725 18:40:04.836851   55363 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:40:04.836858   55363 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:40:04.836831   55363 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:40:04.836877   55363 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:40:04.836832   55363 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:40:04.836835   55363 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0725 18:40:04.837185   55363 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:40:05.063753   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0725 18:40:05.074051   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0725 18:40:05.092907   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:40:05.093187   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:40:05.095225   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0725 18:40:05.106782   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:40:05.146697   55363 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0725 18:40:05.146761   55363 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:40:05.146829   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.159970   55363 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0725 18:40:05.160009   55363 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0725 18:40:05.160052   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.199196   55363 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0725 18:40:05.199241   55363 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:40:05.199291   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.200833   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:40:05.250337   55363 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0725 18:40:05.250385   55363 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:40:05.250435   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.256798   55363 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0725 18:40:05.256840   55363 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0725 18:40:05.256859   55363 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0725 18:40:05.256880   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.256888   55363 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:40:05.256912   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0725 18:40:05.256931   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0725 18:40:05.256936   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.257017   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:40:05.274907   55363 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0725 18:40:05.274948   55363 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:40:05.274944   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:40:05.275006   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.275947   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0725 18:40:05.365454   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:40:05.365502   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0725 18:40:05.365541   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0725 18:40:05.365593   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0725 18:40:05.365601   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0725 18:40:05.365652   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:40:05.375451   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0725 18:40:05.410490   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0725 18:40:05.410755   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0725 18:40:05.679221   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:40:05.824392   55363 cache_images.go:92] duration metric: took 989.23997ms to LoadCachedImages
	W0725 18:40:05.824487   55363 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0725 18:40:05.824505   55363 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.20.0 crio true true} ...
	I0725 18:40:05.824653   55363 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-108542 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
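The block above is the kubelet systemd drop-in rendered for this node, together with the cluster config it was derived from. A small text/template sketch that produces the same shape of ExecStart override from the node name, IP and Kubernetes version; the template text here is illustrative, not minikube's actual asset:

    package main

    import (
    	"os"
    	"text/template"
    )

    const kubeletDropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	// Values mirroring the logged cluster; purely illustrative.
    	data := struct {
    		KubernetesVersion, NodeName, NodeIP string
    	}{"v1.20.0", "old-k8s-version-108542", "192.168.39.29"}

    	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
    	if err := tmpl.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }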
	I0725 18:40:05.824735   55363 ssh_runner.go:195] Run: crio config
	I0725 18:40:05.872925   55363 cni.go:84] Creating CNI manager for ""
	I0725 18:40:05.872942   55363 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:40:05.872950   55363 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:40:05.872967   55363 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-108542 NodeName:old-k8s-version-108542 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0725 18:40:05.873082   55363 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-108542"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:40:05.873156   55363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0725 18:40:05.883172   55363 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:40:05.883242   55363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:40:05.892444   55363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0725 18:40:05.910094   55363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:40:05.927108   55363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
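The multi-document kubeadm config shown above is what lands in /var/tmp/minikube/kubeadm.yaml.new. A quick sanity-check sketch that walks the YAML documents and prints each apiVersion/kind before the file is handed to kubeadm init --config; it assumes a hypothetical local copy named kubeadm.yaml and uses gopkg.in/yaml.v3:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the rendered config
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	// The rendered file is a multi-document stream (InitConfiguration,
    	// ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration).
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
    	}
    }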
	I0725 18:40:05.944764   55363 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0725 18:40:05.948359   55363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
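The two commands above make the control-plane.minikube.internal entry idempotent: grep for an existing mapping, then rewrite /etc/hosts by filtering out any stale line and appending the current one. A minimal sketch of the same drop-then-append rewrite in Go, pointed at a hypothetical file so it is safe to run:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost rewrites a hosts file so that exactly one line maps host to ip,
    // mirroring the grep -v / echo / cp sequence in the log.
    func upsertHost(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line == "" || strings.HasSuffix(line, "\t"+host) {
    			continue // drop blank lines and any stale entry for host
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// /tmp path is hypothetical; the real target is /etc/hosts on the guest.
    	if err := upsertHost("/tmp/hosts", "192.168.39.29", "control-plane.minikube.internal"); err != nil {
    		panic(err)
    	}
    }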
	I0725 18:40:05.960525   55363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:40:06.091832   55363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:40:06.108764   55363 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542 for IP: 192.168.39.29
	I0725 18:40:06.108788   55363 certs.go:194] generating shared ca certs ...
	I0725 18:40:06.108807   55363 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.108952   55363 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:40:06.109018   55363 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:40:06.109030   55363 certs.go:256] generating profile certs ...
	I0725 18:40:06.109096   55363 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key
	I0725 18:40:06.109114   55363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt with IP's: []
	I0725 18:40:06.211721   55363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt ...
	I0725 18:40:06.211754   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: {Name:mk8328536e6d3e3be7b69becd8ce6118480d4a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.211946   55363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key ...
	I0725 18:40:06.211965   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key: {Name:mk8e33c79977a60da7b73fdc37309f0c31106033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.212070   55363 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0
	I0725 18:40:06.212090   55363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt.da8b5ed0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.29]
	I0725 18:40:06.367802   55363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt.da8b5ed0 ...
	I0725 18:40:06.367833   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt.da8b5ed0: {Name:mk91608cd2a2de482eeb1632fee3d4305bd1201d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.368013   55363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0 ...
	I0725 18:40:06.368031   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0: {Name:mk4bb7258cc724f63f302746925003a7acfe5435 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.368122   55363 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt.da8b5ed0 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt
	I0725 18:40:06.368242   55363 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key
	I0725 18:40:06.368345   55363 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key
	I0725 18:40:06.368369   55363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt with IP's: []
	I0725 18:40:06.502724   55363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt ...
	I0725 18:40:06.502756   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt: {Name:mk92d2a6177ff7a114ce9ed043355ebaa1c7b554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.582324   55363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key ...
	I0725 18:40:06.582381   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key: {Name:mk0843904be2b18411f9215c4b88ee807d70f9ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
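The certs.go and crypto.go lines above create the profile certificates, each signed by the shared minikubeCA: a client cert, an apiserver cert whose SANs include 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.29, and the aggregator proxy-client cert. A minimal crypto/x509 sketch of issuing one such CA-signed certificate with IP SANs; it generates a throwaway CA instead of reusing the cached one, so it only illustrates the mechanism (error handling elided for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA; the real flow reuses the cached minikubeCA key pair.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf certificate with the apiserver-style IP SANs seen in the log.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.29"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
    }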
	I0725 18:40:06.582628   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:40:06.582680   55363 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:40:06.582696   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:40:06.582724   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:40:06.582752   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:40:06.582776   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:40:06.582823   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:40:06.583537   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:40:06.610789   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:40:06.637514   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:40:06.666644   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:40:06.689890   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 18:40:06.729398   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:40:06.755382   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:40:06.780347   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:40:06.805821   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:40:06.828814   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:40:06.852649   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:40:06.876771   55363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:40:06.893439   55363 ssh_runner.go:195] Run: openssl version
	I0725 18:40:06.898916   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:40:06.909715   55363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:40:06.913779   55363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:40:06.913835   55363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:40:06.919070   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:40:06.929184   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:40:06.939501   55363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:40:06.943820   55363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:40:06.943885   55363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:40:06.949830   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:40:06.963133   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:40:06.974988   55363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:40:06.983118   55363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:40:06.983196   55363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:40:06.992387   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
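Each certificate copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 above), which is how the system trust store locates it. A small sketch that computes the hash with the openssl CLI and creates the symlink, using hypothetical paths:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash mirrors `openssl x509 -hash -noout -in cert` followed by
    // `ln -fs cert <certsDir>/<hash>.0`.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	os.Remove(link) // replace any stale link, as ln -fs would
    	return os.Symlink(certPath, link)
    }

    func main() {
    	// Hypothetical paths; the log links into /etc/ssl/certs on the guest.
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }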
	I0725 18:40:07.006291   55363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:40:07.010904   55363 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 18:40:07.010974   55363 kubeadm.go:392] StartCluster: {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:40:07.011080   55363 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:40:07.011158   55363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:40:07.065948   55363 cri.go:89] found id: ""
	I0725 18:40:07.066029   55363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:40:07.076406   55363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:40:07.085776   55363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:40:07.094830   55363 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:40:07.094850   55363 kubeadm.go:157] found existing configuration files:
	
	I0725 18:40:07.094890   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:40:07.103810   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:40:07.103880   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:40:07.113126   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:40:07.121382   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:40:07.121441   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:40:07.129987   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:40:07.138871   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:40:07.138935   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:40:07.147572   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:40:07.159201   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:40:07.159267   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:40:07.168883   55363 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:40:07.286011   55363 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:40:07.286257   55363 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:40:07.441437   55363 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:40:07.441654   55363 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:40:07.441804   55363 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:40:07.636278   55363 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:40:07.848915   55363 out.go:204]   - Generating certificates and keys ...
	I0725 18:40:07.849053   55363 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:40:07.849155   55363 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:40:07.849254   55363 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 18:40:07.927608   55363 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 18:40:08.047052   55363 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 18:40:05.725636   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:05.726138   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:05.726172   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:05.726086   56177 retry.go:31] will retry after 1.7337329s: waiting for machine to come up
	I0725 18:40:07.461359   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:07.461898   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:07.461926   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:07.461858   56177 retry.go:31] will retry after 2.032147319s: waiting for machine to come up
	I0725 18:40:09.495572   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:09.496059   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:09.496086   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:09.496013   56177 retry.go:31] will retry after 1.876418642s: waiting for machine to come up
	I0725 18:40:08.153679   55363 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 18:40:08.307176   55363 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 18:40:08.307492   55363 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-108542] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0725 18:40:08.375108   55363 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 18:40:08.375273   55363 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-108542] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0725 18:40:08.586311   55363 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 18:40:08.691328   55363 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 18:40:08.744404   55363 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 18:40:08.744656   55363 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:40:08.956232   55363 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:40:09.604946   55363 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:40:09.900696   55363 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:40:10.076436   55363 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:40:10.094477   55363 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:40:10.094611   55363 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:40:10.094673   55363 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:40:10.240894   55363 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:40:10.397143   55363 out.go:204]   - Booting up control plane ...
	I0725 18:40:10.397296   55363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:40:10.397398   55363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:40:10.397492   55363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:40:10.397598   55363 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:40:10.397786   55363 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:40:11.373761   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:11.374193   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:11.374220   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:11.374133   56177 retry.go:31] will retry after 2.712165737s: waiting for machine to come up
	I0725 18:40:14.089416   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:14.089881   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:14.089908   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:14.089833   56177 retry.go:31] will retry after 4.38374452s: waiting for machine to come up
	I0725 18:40:18.477307   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:18.477692   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:40:18.477719   55806 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:40:18.477658   56177 retry.go:31] will retry after 5.299176966s: waiting for machine to come up
	I0725 18:40:23.782442   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:23.782957   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has current primary IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:23.782972   55806 main.go:141] libmachine: (no-preload-371663) Found IP for machine: 192.168.72.62
	I0725 18:40:23.782981   55806 main.go:141] libmachine: (no-preload-371663) Reserving static IP address...
	I0725 18:40:23.783337   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"} in network mk-no-preload-371663
	I0725 18:40:23.855891   55806 main.go:141] libmachine: (no-preload-371663) DBG | Getting to WaitForSSH function...
	I0725 18:40:23.855924   55806 main.go:141] libmachine: (no-preload-371663) Reserved static IP address: 192.168.72.62
	I0725 18:40:23.855939   55806 main.go:141] libmachine: (no-preload-371663) Waiting for SSH to be available...
	I0725 18:40:23.858726   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:23.859029   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663
	I0725 18:40:23.859055   55806 main.go:141] libmachine: (no-preload-371663) DBG | unable to find defined IP address of network mk-no-preload-371663 interface with MAC address 52:54:00:dc:2b:39
	I0725 18:40:23.859208   55806 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH client type: external
	I0725 18:40:23.859254   55806 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa (-rw-------)
	I0725 18:40:23.859292   55806 main.go:141] libmachine: (no-preload-371663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:40:23.859310   55806 main.go:141] libmachine: (no-preload-371663) DBG | About to run SSH command:
	I0725 18:40:23.859326   55806 main.go:141] libmachine: (no-preload-371663) DBG | exit 0
	I0725 18:40:23.862889   55806 main.go:141] libmachine: (no-preload-371663) DBG | SSH cmd err, output: exit status 255: 
	I0725 18:40:23.862908   55806 main.go:141] libmachine: (no-preload-371663) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0725 18:40:23.862915   55806 main.go:141] libmachine: (no-preload-371663) DBG | command : exit 0
	I0725 18:40:23.862920   55806 main.go:141] libmachine: (no-preload-371663) DBG | err     : exit status 255
	I0725 18:40:23.862927   55806 main.go:141] libmachine: (no-preload-371663) DBG | output  : 
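WaitForSSH above shells out to the external ssh client and runs exit 0 against the guest; the first attempt fails with status 255 because sshd is not yet accepting connections, and libmachine keeps retrying until the command returns 0. A minimal sketch of that readiness probe with os/exec, with the address and key path as stand-ins:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady runs `exit 0` over ssh, as the WaitForSSH step does; a zero
    // exit status means sshd is up and accepting our key.
    func sshReady(addr, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", keyPath, "docker@"+addr, "exit 0")
    	return cmd.Run() == nil
    }

    func main() {
    	// Hypothetical key path; the log's machine is docker@192.168.72.62.
    	for !sshReady("192.168.72.62", "/path/to/id_rsa") {
    		fmt.Println("ssh not ready, retrying...")
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("ssh is available")
    }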
	I0725 18:40:28.257002   56114 start.go:364] duration metric: took 32.146444804s to acquireMachinesLock for "kubernetes-upgrade-069209"
	I0725 18:40:28.257054   56114 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:40:28.257065   56114 fix.go:54] fixHost starting: 
	I0725 18:40:28.257538   56114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:40:28.257593   56114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:40:28.274364   56114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45241
	I0725 18:40:28.274773   56114 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:40:28.275219   56114 main.go:141] libmachine: Using API Version  1
	I0725 18:40:28.275242   56114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:40:28.275563   56114 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:40:28.275744   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:40:28.275893   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetState
	I0725 18:40:28.277436   56114 fix.go:112] recreateIfNeeded on kubernetes-upgrade-069209: state=Running err=<nil>
	W0725 18:40:28.277457   56114 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:40:28.279341   56114 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-069209" VM ...
	I0725 18:40:26.864522   55806 main.go:141] libmachine: (no-preload-371663) DBG | Getting to WaitForSSH function...
	I0725 18:40:26.867113   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:26.867519   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:26.867550   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:26.867663   55806 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH client type: external
	I0725 18:40:26.867688   55806 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa (-rw-------)
	I0725 18:40:26.867742   55806 main.go:141] libmachine: (no-preload-371663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:40:26.867774   55806 main.go:141] libmachine: (no-preload-371663) DBG | About to run SSH command:
	I0725 18:40:26.867789   55806 main.go:141] libmachine: (no-preload-371663) DBG | exit 0
	I0725 18:40:26.992415   55806 main.go:141] libmachine: (no-preload-371663) DBG | SSH cmd err, output: <nil>: 
	I0725 18:40:26.992734   55806 main.go:141] libmachine: (no-preload-371663) KVM machine creation complete!
	I0725 18:40:26.993025   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetConfigRaw
	I0725 18:40:26.993635   55806 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:40:26.993866   55806 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:40:26.994044   55806 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 18:40:26.994062   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:40:26.995444   55806 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 18:40:26.995463   55806 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 18:40:26.995472   55806 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 18:40:26.995481   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:40:26.997788   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:26.998168   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:26.998196   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:26.998386   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:40:26.998596   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:26.998771   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:26.998930   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:40:26.999089   55806 main.go:141] libmachine: Using SSH client type: native
	I0725 18:40:26.999265   55806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:40:26.999277   55806 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 18:40:27.103412   55806 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:40:27.103436   55806 main.go:141] libmachine: Detecting the provisioner...
	I0725 18:40:27.103443   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:40:27.106590   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.106970   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:27.107012   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.107246   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:40:27.107489   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:27.107666   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:27.107877   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:40:27.108046   55806 main.go:141] libmachine: Using SSH client type: native
	I0725 18:40:27.108247   55806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:40:27.108261   55806 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 18:40:27.208721   55806 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 18:40:27.208794   55806 main.go:141] libmachine: found compatible host: buildroot
	I0725 18:40:27.208808   55806 main.go:141] libmachine: Provisioning with buildroot...
	I0725 18:40:27.208824   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:40:27.209055   55806 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:40:27.209088   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:40:27.209300   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:40:27.212181   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.212584   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:27.212609   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.212688   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:40:27.212870   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:27.213037   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:27.213199   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:40:27.213373   55806 main.go:141] libmachine: Using SSH client type: native
	I0725 18:40:27.213569   55806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:40:27.213590   55806 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-371663 && echo "no-preload-371663" | sudo tee /etc/hostname
	I0725 18:40:27.330320   55806 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-371663
	
	I0725 18:40:27.330355   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:40:27.333660   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.334025   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:27.334051   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.334238   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:40:27.334468   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:27.334651   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:27.334790   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:40:27.334980   55806 main.go:141] libmachine: Using SSH client type: native
	I0725 18:40:27.335199   55806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:40:27.335223   55806 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-371663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-371663/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-371663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:40:27.449735   55806 main.go:141] libmachine: SSH cmd err, output: <nil>: 
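The SSH snippet above is the idempotent /etc/hosts update minikube runs to pin the node name to 127.0.1.1. A minimal standalone sketch of the same pattern (bash; the node name is taken from the log):
	# Map a node name to 127.0.1.1 exactly once, editing an existing entry if present.
	NODE="no-preload-371663"
	if ! grep -q "[[:space:]]${NODE}\$" /etc/hosts; then
	  if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 ${NODE}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
	  fi
	fi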
	I0725 18:40:27.449772   55806 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:40:27.449828   55806 buildroot.go:174] setting up certificates
	I0725 18:40:27.449842   55806 provision.go:84] configureAuth start
	I0725 18:40:27.449855   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:40:27.450161   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:40:27.453322   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.453708   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:27.453738   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.453891   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:40:27.456344   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.456671   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:27.456696   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.456852   55806 provision.go:143] copyHostCerts
	I0725 18:40:27.456913   55806 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:40:27.456926   55806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:40:27.456991   55806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:40:27.457118   55806 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:40:27.457130   55806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:40:27.457159   55806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:40:27.457240   55806 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:40:27.457250   55806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:40:27.457275   55806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:40:27.457344   55806 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.no-preload-371663 san=[127.0.0.1 192.168.72.62 localhost minikube no-preload-371663]
	I0725 18:40:27.602290   55806 provision.go:177] copyRemoteCerts
	I0725 18:40:27.602352   55806 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:40:27.602376   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:40:27.605138   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.605469   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:27.605494   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.605701   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:40:27.605906   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:27.606061   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:40:27.606184   55806 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:40:27.686738   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:40:27.708846   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:40:27.730339   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:40:27.753159   55806 provision.go:87] duration metric: took 303.303999ms to configureAuth
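configureAuth generated a server certificate with the SAN list shown above (127.0.0.1, 192.168.72.62, localhost, minikube, no-preload-371663) and scp'd it to /etc/docker on the guest. To double-check which SANs actually ended up in the certificate, a generic openssl call (not something minikube itself runs) is enough:
	# Print the Subject Alternative Names embedded in the generated server cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'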
	I0725 18:40:27.753186   55806 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:40:27.753386   55806 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:40:27.753481   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:40:27.756305   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.756713   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:27.756740   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:27.756956   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:40:27.757141   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:27.757410   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:27.757692   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:40:27.757906   55806 main.go:141] libmachine: Using SSH client type: native
	I0725 18:40:27.758069   55806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:40:27.758083   55806 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:40:28.025518   55806 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
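The %!s(MISSING) in the command above is Go's fmt marker for a %s verb with no argument, so the real command almost certainly contains a literal %s. Spelled out, the step writes a sysconfig drop-in with the extra CRI-O flag and restarts the runtime:
	# Drop the minikube-specific CRI-O options into /etc/sysconfig and restart CRI-O.
	sudo mkdir -p /etc/sysconfig
	printf "%s" "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio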
	I0725 18:40:28.025554   55806 main.go:141] libmachine: Checking connection to Docker...
	I0725 18:40:28.025566   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetURL
	I0725 18:40:28.026953   55806 main.go:141] libmachine: (no-preload-371663) DBG | Using libvirt version 6000000
	I0725 18:40:28.029395   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.029734   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:28.029773   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.029907   55806 main.go:141] libmachine: Docker is up and running!
	I0725 18:40:28.029928   55806 main.go:141] libmachine: Reticulating splines...
	I0725 18:40:28.029938   55806 client.go:171] duration metric: took 28.928502855s to LocalClient.Create
	I0725 18:40:28.029968   55806 start.go:167] duration metric: took 28.928569604s to libmachine.API.Create "no-preload-371663"
	I0725 18:40:28.029980   55806 start.go:293] postStartSetup for "no-preload-371663" (driver="kvm2")
	I0725 18:40:28.029999   55806 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:40:28.030022   55806 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:40:28.030255   55806 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:40:28.030277   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:40:28.032564   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.032876   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:28.032903   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.033002   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:40:28.033195   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:28.033370   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:40:28.033509   55806 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:40:28.115418   55806 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:40:28.119337   55806 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:40:28.119369   55806 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:40:28.119428   55806 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:40:28.119493   55806 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:40:28.119576   55806 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:40:28.128129   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:40:28.149729   55806 start.go:296] duration metric: took 119.73491ms for postStartSetup
	I0725 18:40:28.149775   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetConfigRaw
	I0725 18:40:28.150340   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:40:28.152961   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.153385   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:28.153425   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.153639   55806 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/config.json ...
	I0725 18:40:28.153819   55806 start.go:128] duration metric: took 29.076585791s to createHost
	I0725 18:40:28.153852   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:40:28.156019   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.156353   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:28.156383   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.156524   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:40:28.156700   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:28.156839   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:28.156947   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:40:28.157104   55806 main.go:141] libmachine: Using SSH client type: native
	I0725 18:40:28.157295   55806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:40:28.157308   55806 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:40:28.256836   55806 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721932828.235281707
	
	I0725 18:40:28.256862   55806 fix.go:216] guest clock: 1721932828.235281707
	I0725 18:40:28.256871   55806 fix.go:229] Guest: 2024-07-25 18:40:28.235281707 +0000 UTC Remote: 2024-07-25 18:40:28.153836662 +0000 UTC m=+53.254158425 (delta=81.445045ms)
	I0725 18:40:28.256913   55806 fix.go:200] guest clock delta is within tolerance: 81.445045ms
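fix.go compares the guest's wall clock against the host and only resynchronizes when the delta exceeds a tolerance; here the 81ms skew was within bounds. A rough manual version of the same check, assuming SSH access as the docker user with the key path shown in the log:
	# Measure host-vs-guest clock skew; small deltas like the 81ms above are accepted.
	host_ts=$(date +%s.%N)
	guest_ts=$(ssh -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa \
	  docker@192.168.72.62 'date +%s.%N')
	echo "clock delta: $(echo "$host_ts - $guest_ts" | bc) seconds"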
	I0725 18:40:28.256920   55806 start.go:83] releasing machines lock for "no-preload-371663", held for 29.179856657s
	I0725 18:40:28.256951   55806 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:40:28.257241   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:40:28.260196   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.260657   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:28.260684   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.260858   55806 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:40:28.261318   55806 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:40:28.261493   55806 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:40:28.261584   55806 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:40:28.261633   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:40:28.261722   55806 ssh_runner.go:195] Run: cat /version.json
	I0725 18:40:28.261746   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:40:28.264223   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.264485   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.264564   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:28.264601   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.264700   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:40:28.264828   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:28.264854   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:28.264891   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:28.265100   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:40:28.265112   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:40:28.265294   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:40:28.265285   55806 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:40:28.265429   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:40:28.265555   55806 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:40:28.340873   55806 ssh_runner.go:195] Run: systemctl --version
	I0725 18:40:28.372049   55806 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:40:28.530181   55806 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:40:28.536026   55806 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:40:28.536102   55806 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:40:28.551544   55806 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
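Before choosing a CNI, minikube renames any pre-existing bridge/podman CNI configs so they cannot shadow the one it installs; the %!p(MISSING) in the find invocation above is find's %p format specifier swallowed by the logger. Written out by hand, the disable step looks like:
	# Move aside bridge/podman CNI configs that are not already disabled.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;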
	I0725 18:40:28.551573   55806 start.go:495] detecting cgroup driver to use...
	I0725 18:40:28.551647   55806 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:40:28.572546   55806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:40:28.587230   55806 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:40:28.587291   55806 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:40:28.602784   55806 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:40:28.618011   55806 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:40:28.757330   55806 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:40:28.920023   55806 docker.go:233] disabling docker service ...
	I0725 18:40:28.920095   55806 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:40:28.934968   55806 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:40:28.948912   55806 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:40:29.105318   55806 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:40:29.229707   55806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:40:29.243503   55806 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:40:29.262562   55806 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0725 18:40:29.262642   55806 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:29.274234   55806 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:40:29.274309   55806 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:29.286075   55806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:29.296538   55806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:29.309735   55806 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:40:29.319575   55806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:29.330542   55806 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:29.347631   55806 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
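Taken together, the crictl.yaml write and the sed edits above leave CRI-O configured for the pause:3.10 image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and unprivileged low ports. A condensed hand-written equivalent of this configuration step, reconstructed from the commands rather than captured from the guest (note the log edits the existing file with sed instead of rewriting it):
	# Point crictl at CRI-O and apply the same crio.conf.d settings the log edits in place.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo crictl info >/dev/null && echo "crictl can reach CRI-O"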
	I0725 18:40:29.358517   55806 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:40:29.367618   55806 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:40:29.367695   55806 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:40:29.380068   55806 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:40:29.388496   55806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:40:29.502946   55806 ssh_runner.go:195] Run: sudo systemctl restart crio
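The failed sysctl probe above only means br_netfilter was not loaded yet; minikube then loads the module, enables IPv4 forwarding, and restarts CRI-O. To make the same kernel settings survive a reboot (an illustration of the settings, not what minikube does; it reapplies them ad hoc on each start):
	# Persist the bridge-netfilter module and the sysctls Kubernetes networking needs.
	sudo modprobe br_netfilter
	echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	printf 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' \
	  | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system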
	I0725 18:40:29.634144   55806 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:40:29.634235   55806 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:40:29.638600   55806 start.go:563] Will wait 60s for crictl version
	I0725 18:40:29.638649   55806 ssh_runner.go:195] Run: which crictl
	I0725 18:40:29.641976   55806 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:40:29.678474   55806 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:40:29.678544   55806 ssh_runner.go:195] Run: crio --version
	I0725 18:40:29.705205   55806 ssh_runner.go:195] Run: crio --version
	I0725 18:40:29.734525   55806 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0725 18:40:29.735770   55806 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:40:29.738374   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:29.738732   55806 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:40:29.738759   55806 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:40:29.738957   55806 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0725 18:40:29.742742   55806 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:40:29.756305   55806 kubeadm.go:883] updating cluster {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:40:29.756470   55806 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0725 18:40:29.756513   55806 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:40:29.791150   55806 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0725 18:40:29.791173   55806 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:40:29.791257   55806 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:40:29.791270   55806 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:40:29.791317   55806 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:40:29.791349   55806 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:40:29.791375   55806 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:40:29.791319   55806 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:40:29.791352   55806 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:40:29.791375   55806 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0725 18:40:29.792735   55806 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:40:29.792815   55806 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:40:29.792844   55806 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:40:29.792861   55806 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:40:29.792809   55806 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:40:29.792928   55806 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:40:29.792738   55806 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:40:29.792995   55806 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0725 18:40:28.280634   56114 machine.go:94] provisionDockerMachine start ...
	I0725 18:40:28.280654   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:40:28.280851   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:40:28.283459   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.283920   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:28.283947   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.284157   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:40:28.284360   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:28.284529   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:28.284673   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:40:28.284862   56114 main.go:141] libmachine: Using SSH client type: native
	I0725 18:40:28.285085   56114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0725 18:40:28.285098   56114 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:40:28.393483   56114 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-069209
	
	I0725 18:40:28.393514   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetMachineName
	I0725 18:40:28.393806   56114 buildroot.go:166] provisioning hostname "kubernetes-upgrade-069209"
	I0725 18:40:28.393829   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetMachineName
	I0725 18:40:28.394024   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:40:28.396846   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.397267   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:28.397289   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.397489   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:40:28.397660   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:28.397798   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:28.397951   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:40:28.398147   56114 main.go:141] libmachine: Using SSH client type: native
	I0725 18:40:28.398310   56114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0725 18:40:28.398322   56114 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-069209 && echo "kubernetes-upgrade-069209" | sudo tee /etc/hostname
	I0725 18:40:28.566095   56114 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-069209
	
	I0725 18:40:28.566124   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:40:28.569549   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.569967   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:28.569996   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.570266   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:40:28.570481   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:28.570691   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:28.570867   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:40:28.571044   56114 main.go:141] libmachine: Using SSH client type: native
	I0725 18:40:28.571266   56114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0725 18:40:28.571290   56114 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-069209' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-069209/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-069209' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:40:28.689004   56114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:40:28.689041   56114 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:40:28.689108   56114 buildroot.go:174] setting up certificates
	I0725 18:40:28.689124   56114 provision.go:84] configureAuth start
	I0725 18:40:28.689142   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetMachineName
	I0725 18:40:28.689449   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetIP
	I0725 18:40:28.692364   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.692794   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:28.692810   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.693087   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:40:28.695514   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.695866   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:28.695892   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.696015   56114 provision.go:143] copyHostCerts
	I0725 18:40:28.696063   56114 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:40:28.696072   56114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:40:28.696126   56114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:40:28.696224   56114 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:40:28.696232   56114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:40:28.696252   56114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:40:28.696397   56114 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:40:28.696406   56114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:40:28.696427   56114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:40:28.696491   56114 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-069209 san=[127.0.0.1 192.168.50.165 kubernetes-upgrade-069209 localhost minikube]
	I0725 18:40:28.929886   56114 provision.go:177] copyRemoteCerts
	I0725 18:40:28.929953   56114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:40:28.929991   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:40:28.932895   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.933245   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:28.933265   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:28.933457   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:40:28.933653   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:28.933818   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:40:28.933979   56114 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa Username:docker}
	I0725 18:40:29.015011   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0725 18:40:29.044406   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:40:29.073227   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:40:29.101304   56114 provision.go:87] duration metric: took 412.162877ms to configureAuth
	I0725 18:40:29.101335   56114 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:40:29.101551   56114 config.go:182] Loaded profile config "kubernetes-upgrade-069209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:40:29.101645   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:40:29.104701   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:29.105100   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:29.105133   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:29.105315   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:40:29.105638   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:29.105853   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:29.106049   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:40:29.106258   56114 main.go:141] libmachine: Using SSH client type: native
	I0725 18:40:29.106480   56114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0725 18:40:29.106505   56114 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:40:29.973708   55806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:40:30.001587   55806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:40:30.002019   55806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:40:30.002201   55806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0725 18:40:30.006415   55806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0725 18:40:30.014620   55806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:40:30.021010   55806 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0725 18:40:30.021058   55806 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:40:30.021112   55806 ssh_runner.go:195] Run: which crictl
	I0725 18:40:30.065623   55806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:40:30.156090   55806 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0725 18:40:30.156142   55806 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:40:30.156188   55806 ssh_runner.go:195] Run: which crictl
	I0725 18:40:30.160656   55806 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0725 18:40:30.160675   55806 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0725 18:40:30.160697   55806 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:40:30.160707   55806 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:40:30.160729   55806 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0725 18:40:30.160746   55806 ssh_runner.go:195] Run: which crictl
	I0725 18:40:30.160758   55806 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0725 18:40:30.160767   55806 ssh_runner.go:195] Run: which crictl
	I0725 18:40:30.160792   55806 ssh_runner.go:195] Run: which crictl
	I0725 18:40:30.160855   55806 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0725 18:40:30.160883   55806 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:40:30.160932   55806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:40:30.160936   55806 ssh_runner.go:195] Run: which crictl
	I0725 18:40:30.164287   55806 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0725 18:40:30.164346   55806 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:40:30.164389   55806 ssh_runner.go:195] Run: which crictl
	I0725 18:40:30.166602   55806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:40:30.210116   55806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:40:30.210134   55806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0725 18:40:30.210176   55806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:40:30.210220   55806 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:40:30.210231   55806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0725 18:40:30.210235   55806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0725 18:40:30.210265   55806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:40:30.222024   55806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0725 18:40:30.222118   55806 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:40:30.314970   55806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0725 18:40:30.315081   55806 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:40:30.319698   55806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0725 18:40:30.319720   55806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0725 18:40:30.319783   55806 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0725 18:40:30.319817   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0725 18:40:30.319858   55806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0725 18:40:30.319795   55806 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:40:30.319915   55806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0725 18:40:30.319931   55806 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:40:30.319795   55806 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0725 18:40:30.319980   55806 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0': No such file or directory
	I0725 18:40:30.319991   55806 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:40:30.320006   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (30186496 bytes)
	I0725 18:40:30.323914   55806 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0': No such file or directory
	I0725 18:40:30.324207   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (27889152 bytes)
	I0725 18:40:30.358098   55806 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.14-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.14-0': No such file or directory
	I0725 18:40:30.358140   55806 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0': No such file or directory
	I0725 18:40:30.358154   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 --> /var/lib/minikube/images/etcd_3.5.14-0 (56932864 bytes)
	I0725 18:40:30.358175   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (20081152 bytes)
	I0725 18:40:30.358250   55806 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0725 18:40:30.358286   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0725 18:40:30.358318   55806 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0': No such file or directory
	I0725 18:40:30.358360   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (26149888 bytes)
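The sequence above is a check-then-transfer loop: for every cached image tarball minikube first runs stat -c "%s %y" on the guest path, and only when that exits with status 1 (file missing) does it scp the tarball from the local cache into /var/lib/minikube/images. A minimal local sketch of that pattern follows; the paths are placeholders and a plain file copy stands in for the scp-over-ssh transfer, so this is an illustration of the idea rather than minikube's actual code.

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// ensureCached copies src to dst only if dst does not already exist,
// mirroring the "existence check ... Process exited with status 1" -> "scp"
// sequence in the log above.
func ensureCached(src, dst string) error {
	// Equivalent of: stat -c "%s %y" <dst>
	if out, err := exec.Command("stat", "-c", "%s %y", dst).Output(); err == nil {
		fmt.Printf("already cached: %s (%s)", dst, string(out))
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in) // stand-in for the scp transfer in the log
	return err
}

func main() {
	// Hypothetical paths; the log copies from the jenkins cache directory
	// into /var/lib/minikube/images on the guest.
	if err := ensureCached("./cache/pause_3.10", "./images/pause_3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}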
	I0725 18:40:30.537983   55806 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I0725 18:40:30.538061   55806 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I0725 18:40:30.617803   55806 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:40:31.156383   55806 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0725 18:40:31.156429   55806 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:40:31.156475   55806 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0725 18:40:31.156506   55806 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:40:31.156517   55806 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:40:31.156559   55806 ssh_runner.go:195] Run: which crictl
	I0725 18:40:33.656403   55806 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.499866998s)
	I0725 18:40:33.656435   55806 ssh_runner.go:235] Completed: which crictl: (2.499851607s)
	I0725 18:40:33.656527   55806 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:40:33.656442   55806 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0725 18:40:33.656631   55806 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:40:33.656685   55806 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:40:33.692577   55806 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 18:40:33.692689   55806 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:40:35.195276   56114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:40:35.195309   56114 machine.go:97] duration metric: took 6.914660661s to provisionDockerMachine
	I0725 18:40:35.195324   56114 start.go:293] postStartSetup for "kubernetes-upgrade-069209" (driver="kvm2")
	I0725 18:40:35.195344   56114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:40:35.195364   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:40:35.195838   56114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:40:35.195870   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:40:35.198980   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:35.199451   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:35.199479   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:35.199863   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:40:35.200067   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:35.200258   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:40:35.200439   56114 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa Username:docker}
	I0725 18:40:35.286986   56114 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:40:35.291240   56114 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:40:35.291265   56114 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:40:35.291340   56114 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:40:35.291425   56114 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:40:35.291507   56114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:40:35.301393   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:40:35.325600   56114 start.go:296] duration metric: took 130.254447ms for postStartSetup
	I0725 18:40:35.325650   56114 fix.go:56] duration metric: took 7.06858511s for fixHost
	I0725 18:40:35.325671   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:40:35.329098   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:35.329503   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:35.329552   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:35.329719   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:40:35.329912   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:35.330126   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:35.330296   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:40:35.330624   56114 main.go:141] libmachine: Using SSH client type: native
	I0725 18:40:35.330783   56114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0725 18:40:35.330794   56114 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 18:40:35.441263   56114 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721932835.430334169
	
	I0725 18:40:35.441290   56114 fix.go:216] guest clock: 1721932835.430334169
	I0725 18:40:35.441299   56114 fix.go:229] Guest: 2024-07-25 18:40:35.430334169 +0000 UTC Remote: 2024-07-25 18:40:35.325654394 +0000 UTC m=+39.352806595 (delta=104.679775ms)
	I0725 18:40:35.441323   56114 fix.go:200] guest clock delta is within tolerance: 104.679775ms
	I0725 18:40:35.441342   56114 start.go:83] releasing machines lock for "kubernetes-upgrade-069209", held for 7.18429749s
	I0725 18:40:35.441366   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:40:35.441651   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetIP
	I0725 18:40:35.444875   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:35.445334   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:35.445382   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:35.445571   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:40:35.446159   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:40:35.446354   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .DriverName
	I0725 18:40:35.446428   56114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:40:35.446479   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:40:35.446593   56114 ssh_runner.go:195] Run: cat /version.json
	I0725 18:40:35.446618   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHHostname
	I0725 18:40:35.449320   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:35.449640   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:35.449915   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:35.449942   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:35.450110   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:35.450115   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:40:35.450129   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:35.450285   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:35.450345   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHPort
	I0725 18:40:35.450520   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:40:35.450557   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHKeyPath
	I0725 18:40:35.450697   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetSSHUsername
	I0725 18:40:35.450688   56114 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa Username:docker}
	I0725 18:40:35.450828   56114 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kubernetes-upgrade-069209/id_rsa Username:docker}
	I0725 18:40:35.529625   56114 ssh_runner.go:195] Run: systemctl --version
	I0725 18:40:35.559677   56114 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:40:35.714837   56114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:40:35.724017   56114 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:40:35.724089   56114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:40:35.733259   56114 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0725 18:40:35.733286   56114 start.go:495] detecting cgroup driver to use...
	I0725 18:40:35.733403   56114 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:40:35.750476   56114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:40:35.766316   56114 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:40:35.766389   56114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:40:35.780291   56114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:40:35.793509   56114 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:40:35.947566   56114 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:40:36.091474   56114 docker.go:233] disabling docker service ...
	I0725 18:40:36.091547   56114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:40:36.113342   56114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:40:36.127652   56114 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:40:36.285644   56114 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:40:36.453418   56114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:40:36.567701   56114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:40:36.620457   56114 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0725 18:40:36.620521   56114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:36.654482   56114 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:40:36.654568   56114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:36.765210   56114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:36.925622   56114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:37.011932   56114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:40:37.042982   56114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:37.062149   56114 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:37.078884   56114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:37.108174   56114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:40:37.123135   56114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:40:37.197719   56114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:40:37.449770   56114 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:40:38.177525   56114 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:40:38.177619   56114 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:40:38.182400   56114 start.go:563] Will wait 60s for crictl version
	I0725 18:40:38.182456   56114 ssh_runner.go:195] Run: which crictl
	I0725 18:40:38.186088   56114 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:40:38.223134   56114 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:40:38.223217   56114 ssh_runner.go:195] Run: crio --version
	I0725 18:40:38.251682   56114 ssh_runner.go:195] Run: crio --version
	I0725 18:40:38.282495   56114 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0725 18:40:35.619798   55806 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.963086557s)
	I0725 18:40:35.619825   55806 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.927117202s)
	I0725 18:40:35.619831   55806 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0725 18:40:35.619845   55806 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0725 18:40:35.619860   55806 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:40:35.619875   55806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0725 18:40:35.619900   55806 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:40:37.706471   55806 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.086548727s)
	I0725 18:40:37.706508   55806 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0725 18:40:37.706550   55806 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:40:37.706613   55806 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:40:39.759100   55806 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.05246349s)
	I0725 18:40:39.759129   55806 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0725 18:40:39.759150   55806 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:40:39.759228   55806 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
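Once a tarball is on the guest, the log above loads it into the CRI-O image store one image at a time with `sudo podman load -i <tar>`, recording "Transferred and loaded ... from cache" after each success. A small sketch of that loop is below; it assumes podman is on PATH and that the tarballs sit in a local ./images directory, neither of which is taken from the log itself.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	tars, err := filepath.Glob("./images/*")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, tar := range tars {
		// Equivalent of: sudo podman load -i /var/lib/minikube/images/<name>
		cmd := exec.Command("podman", "load", "-i", tar)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "loading %s failed: %v\n", tar, err)
		}
	}
}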
	I0725 18:40:38.283722   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) Calling .GetIP
	I0725 18:40:38.286636   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:38.287120   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:50:c6", ip: ""} in network mk-kubernetes-upgrade-069209: {Iface:virbr2 ExpiryTime:2024-07-25 19:34:15 +0000 UTC Type:0 Mac:52:54:00:33:50:c6 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:kubernetes-upgrade-069209 Clientid:01:52:54:00:33:50:c6}
	I0725 18:40:38.287147   56114 main.go:141] libmachine: (kubernetes-upgrade-069209) DBG | domain kubernetes-upgrade-069209 has defined IP address 192.168.50.165 and MAC address 52:54:00:33:50:c6 in network mk-kubernetes-upgrade-069209
	I0725 18:40:38.287396   56114 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0725 18:40:38.291532   56114 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-069209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-069209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.165 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:40:38.292009   56114 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0725 18:40:38.292144   56114 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:40:38.337506   56114 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:40:38.337535   56114 crio.go:433] Images already preloaded, skipping extraction
	I0725 18:40:38.337601   56114 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:40:38.369440   56114 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:40:38.369462   56114 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:40:38.369471   56114 kubeadm.go:934] updating node { 192.168.50.165 8443 v1.31.0-beta.0 crio true true} ...
	I0725 18:40:38.369595   56114 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-069209 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-069209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:40:38.369672   56114 ssh_runner.go:195] Run: crio config
	I0725 18:40:38.422769   56114 cni.go:84] Creating CNI manager for ""
	I0725 18:40:38.422792   56114 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:40:38.422801   56114 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:40:38.422820   56114 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.165 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-069209 NodeName:kubernetes-upgrade-069209 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:40:38.422952   56114 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-069209"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
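The block above is the multi-document YAML that minikube renders (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) before copying it to /var/tmp/minikube/kubeadm.yaml.new. A small sketch that walks such a file and reports each document's apiVersion and kind is shown here; it assumes the rendered config has been saved locally as kubeadm.yaml and uses gopkg.in/yaml.v3, both assumptions for illustration rather than anything the log performs.

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the rendered config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents
			}
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}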
	
	I0725 18:40:38.423009   56114 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0725 18:40:38.433768   56114 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:40:38.433848   56114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:40:38.444132   56114 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0725 18:40:38.463297   56114 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0725 18:40:38.482653   56114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I0725 18:40:38.502557   56114 ssh_runner.go:195] Run: grep 192.168.50.165	control-plane.minikube.internal$ /etc/hosts
	I0725 18:40:38.507905   56114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:40:38.688745   56114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:40:38.732811   56114 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209 for IP: 192.168.50.165
	I0725 18:40:38.732836   56114 certs.go:194] generating shared ca certs ...
	I0725 18:40:38.732868   56114 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:38.733049   56114 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:40:38.733104   56114 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:40:38.733117   56114 certs.go:256] generating profile certs ...
	I0725 18:40:38.733220   56114 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/client.key
	I0725 18:40:38.733279   56114 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.key.ade831bd
	I0725 18:40:38.733326   56114 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/proxy-client.key
	I0725 18:40:38.733465   56114 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:40:38.733505   56114 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:40:38.733531   56114 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:40:38.733564   56114 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:40:38.733595   56114 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:40:38.733630   56114 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:40:38.733684   56114 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:40:38.734542   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:40:38.966436   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:40:39.126961   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:40:39.195129   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:40:39.234089   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0725 18:40:39.290587   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:40:39.365352   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:40:39.434090   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kubernetes-upgrade-069209/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:40:39.465506   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:40:39.490808   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:40:39.516349   56114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:40:39.540414   56114 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:40:39.557743   56114 ssh_runner.go:195] Run: openssl version
	I0725 18:40:39.563708   56114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:40:39.574506   56114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:40:39.579211   56114 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:40:39.579260   56114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:40:39.588153   56114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:40:39.600972   56114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:40:39.639759   56114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:40:39.647816   56114 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:40:39.647884   56114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:40:39.655362   56114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:40:39.675637   56114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:40:39.690627   56114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:40:39.695083   56114 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:40:39.695142   56114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:40:39.703411   56114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:40:39.719642   56114 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:40:39.726985   56114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:40:39.755310   56114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:40:39.770165   56114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:40:39.776052   56114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:40:39.781546   56114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:40:39.787045   56114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
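The `openssl x509 -noout -in ... -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. An equivalent check using only Go's standard library looks roughly like this; the certificate path is a placeholder, not one taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("apiserver.crt") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same threshold as -checkend 86400: fail if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h; regeneration needed")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}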
	I0725 18:40:39.792371   56114 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-069209 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-069209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.165 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:40:39.792450   56114 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:40:39.792499   56114 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:40:39.830342   56114 cri.go:89] found id: "b3039941fb78a1057fd657e08a0b6de5c6b962973e95fc09231c25cb1f443951"
	I0725 18:40:39.830370   56114 cri.go:89] found id: "904977d06827645b461766b9613f70e3f1bfd8cc17a629ee9a41fb3852fc5900"
	I0725 18:40:39.830376   56114 cri.go:89] found id: "76aa9a22f7e2df06b449518ff8fdce7480cd275db96a252edddb05d8c04ea0ff"
	I0725 18:40:39.830393   56114 cri.go:89] found id: "09a59bbc549ce72bfdc30c68c6a88f36bfc69d06958ff8f9f776df5fbbec4eb7"
	I0725 18:40:39.830397   56114 cri.go:89] found id: "b42e2fd64ce7dda253933bcf5d884163e28a6e62a127c3d3709f76597f8fc701"
	I0725 18:40:39.830402   56114 cri.go:89] found id: "33daaa061ae694dede503a90c7a7c84b0e525a52ab8840a785c5f1f3c2d1d7d7"
	I0725 18:40:39.830405   56114 cri.go:89] found id: "ddd067899ec398bf7742217ec05330dad0e0e7cd243809c4977485ccd6a9b6a0"
	I0725 18:40:39.830409   56114 cri.go:89] found id: "b421d4a39c7b5cc4e1bfbba3c561dfd80b0bc85bac9393ba99bd4dc9d9705d3d"
	I0725 18:40:39.830412   56114 cri.go:89] found id: "8bf260a1356bdbb4db097a4022cd588bfc18528762dfd75062a9041836de354e"
	I0725 18:40:39.830421   56114 cri.go:89] found id: "50ea24d5de9714c43415d3815874ec3107d59003b75ab30b9ba71a1266c1cb9c"
	I0725 18:40:39.830425   56114 cri.go:89] found id: ""
	I0725 18:40:39.830479   56114 ssh_runner.go:195] Run: sudo runc list -f json
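To decide what needs restarting, the step above lists every kube-system container directly from the CRI with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and collects the returned IDs ("found id: ..."). A thin wrapper around that same invocation is sketched here; running it requires crictl and root on a node, which is assumed rather than shown by the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id) // one container ID per line, as in the log
	}
}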
	
	
	==> CRI-O <==
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.753138052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932863753110879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19c70380-bb81-4aec-a9aa-99289931f0f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.753985121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dab37196-f98c-4ca2-868b-970ab49214ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.754042428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dab37196-f98c-4ca2-868b-970ab49214ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.754382368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8044dc5d21e6e80d20d7fc11609a0eb1fec3dd4a77bfc087da54be6b5d0beac1,PodSandboxId:3bab1dd6aa54ed3ac3d8feadc390dabcace6e028c40ee8b56c25e97d68884023,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932860670884466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gbrmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17800b2d-cae0-4137-a15a-8edd4861b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29da01ce2614d8fbef1a642482143cf59dbedeb7dae25b8a79700dc8f90fe94,PodSandboxId:f0e0f2d06affa077bcf58049810c30519cb502d6cc0c786d72506c0e17e8c8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932860658692248,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sws55,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6d85a384-bad8-4007-a082-fd0a9b2d9893,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff953de54dfbd31948adc3901b9b2381f896668aafd90241337e4a551ebda031,PodSandboxId:174264da35db529ab64c5042a5febdcae947f9e7a7f8ef519f2977dd46eba309,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721932860648846433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d223bd0f-e8eb-4586-9a22-d4a23d9cb659,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd0e6811653e15136a66328b0fb41feab6f12d7c48c3c941831bec7bb6bef9e,PodSandboxId:4663e1ae5e2ca8c825ab40d542094e102406d0dfb1e8032ed57923036c6ca93a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNI
NG,CreatedAt:1721932856814392047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c8bc07df1ca7050cc93b3835ef46bf,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853f77882e1ebd61301433669347f0535b74e855150b6ea2f08dc0871917fecc,PodSandboxId:fd32379ea3fa867eb2f5503451e6f79fe8d98929c657045587810847d0aa8f7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,Cr
eatedAt:1721932856782104283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c48d715c88b6d0fbb43ec5eee853b392,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7332a3834e66b0181becf9638dd3df46a1247850d26f4984ac4254b501f834b6,PodSandboxId:78224e4ffb343002bb690b0d0576b6c5e6ccfb1a1541223adb24fb36f3e5abce,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721932
856802326819,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28938e380bab9f01a42b2e07713060cf,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d330a47a9b25cafdb42c277fecb833c1da1b83c1373693c523366466d78f08d8,PodSandboxId:79f32705de3ca5ec6e927be3f026ebda30ccd8ffb0928b681d6c587d17e86a1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721932856758596451,Labe
ls:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1d742115a4469722e6778f43b7ac8a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6f82e7e3314f2bc6a2375454459faac4324929f66a8011cbb8edb80e465d5a,PodSandboxId:da39d2114245c51e61d4c07bac7a3abc816750b17b253b3a735aa0514a4e6c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:172193285224800
5873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hl75b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8b9483-0286-4adc-a8e2-8be2de41d547,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd61debac04aa26a42e37de147d468a2f423c089b4e3bcd259d763da4c99d1c2,PodSandboxId:174264da35db529ab64c5042a5febdcae947f9e7a7f8ef519f2977dd46eba309,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721932839851023079,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d223bd0f-e8eb-4586-9a22-d4a23d9cb659,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3039941fb78a1057fd657e08a0b6de5c6b962973e95fc09231c25cb1f443951,PodSandboxId:f0e0f2d06affa077bcf58049810c30519cb502d6cc0c786d72506c0e17e8c8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932839615045947,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sws55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d85a384-bad8-4007-a082-fd0a9b2d9893,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904977d06827645b461766b9613f70e3f1bfd8cc17a629ee9a41fb3852fc5900,PodSandboxId:3bab1dd6aa54ed3ac3d8feadc390dabcace6e028c40ee8b56c25e97d68884023,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932839346000600,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gbrmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17800b2d-cae0-4137-a15a-8edd4861b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76aa9a22f7e2df06b449518ff8fdce7480cd275db96a252edddb05d8c04ea0ff,PodSandboxId:2a01630d1aca229dcb07e491a128709272b1cfa9158135ee74c390c29140
731c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721932837246367341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c48d715c88b6d0fbb43ec5eee853b392,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b42e2fd64ce7dda253933bcf5d884163e28a6e62a127c3d3709f76597f8fc701,PodSandboxId:03a9a6ab5aade1fc1af62c6256887fc877017fe136ea48b02342a37e82fa123f,M
etadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721932837182736565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1d742115a4469722e6778f43b7ac8a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a59bbc549ce72bfdc30c68c6a88f36bfc69d06958ff8f9f776df5fbbec4eb7,PodSandboxId:b07c978dc42542fbbc162bd99638e1e7451082e469130
e8ce6c9814c2856c7be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721932837240657633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c8bc07df1ca7050cc93b3835ef46bf,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33daaa061ae694dede503a90c7a7c84b0e525a52ab8840a785c5f1f3c2d1d7d7,PodSandboxId:7896b81d6bc00828e1984a82d4e84df4a658bb49e490b3a08ca
552132bcb82db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721932836955140651,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hl75b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8b9483-0286-4adc-a8e2-8be2de41d547,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd067899ec398bf7742217ec05330dad0e0e7cd243809c4977485ccd6a9b6a0,PodSandboxId:958f4c4a8ca24f4693c68ccfcdeedb0d496db41aeb6ceefe2921cd90841b9a47,Metadata:&ContainerM
etadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721932836862374599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28938e380bab9f01a42b2e07713060cf,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dab37196-f98c-4ca2-868b-970ab49214ae name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.810815788Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb87960a-af4f-4fff-814e-d77d275b261c name=/runtime.v1.RuntimeService/Version
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.810906150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb87960a-af4f-4fff-814e-d77d275b261c name=/runtime.v1.RuntimeService/Version
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.813327056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c45ab8b-6669-4429-a2c1-bce69ea4f53f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.813767888Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932863813738967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c45ab8b-6669-4429-a2c1-bce69ea4f53f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.814285704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3434ae30-d4b3-4962-8446-576f0e73df02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.814348738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3434ae30-d4b3-4962-8446-576f0e73df02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.814936210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8044dc5d21e6e80d20d7fc11609a0eb1fec3dd4a77bfc087da54be6b5d0beac1,PodSandboxId:3bab1dd6aa54ed3ac3d8feadc390dabcace6e028c40ee8b56c25e97d68884023,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932860670884466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gbrmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17800b2d-cae0-4137-a15a-8edd4861b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29da01ce2614d8fbef1a642482143cf59dbedeb7dae25b8a79700dc8f90fe94,PodSandboxId:f0e0f2d06affa077bcf58049810c30519cb502d6cc0c786d72506c0e17e8c8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932860658692248,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sws55,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6d85a384-bad8-4007-a082-fd0a9b2d9893,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff953de54dfbd31948adc3901b9b2381f896668aafd90241337e4a551ebda031,PodSandboxId:174264da35db529ab64c5042a5febdcae947f9e7a7f8ef519f2977dd46eba309,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721932860648846433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d223bd0f-e8eb-4586-9a22-d4a23d9cb659,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd0e6811653e15136a66328b0fb41feab6f12d7c48c3c941831bec7bb6bef9e,PodSandboxId:4663e1ae5e2ca8c825ab40d542094e102406d0dfb1e8032ed57923036c6ca93a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNI
NG,CreatedAt:1721932856814392047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c8bc07df1ca7050cc93b3835ef46bf,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853f77882e1ebd61301433669347f0535b74e855150b6ea2f08dc0871917fecc,PodSandboxId:fd32379ea3fa867eb2f5503451e6f79fe8d98929c657045587810847d0aa8f7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,Cr
eatedAt:1721932856782104283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c48d715c88b6d0fbb43ec5eee853b392,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7332a3834e66b0181becf9638dd3df46a1247850d26f4984ac4254b501f834b6,PodSandboxId:78224e4ffb343002bb690b0d0576b6c5e6ccfb1a1541223adb24fb36f3e5abce,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721932
856802326819,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28938e380bab9f01a42b2e07713060cf,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d330a47a9b25cafdb42c277fecb833c1da1b83c1373693c523366466d78f08d8,PodSandboxId:79f32705de3ca5ec6e927be3f026ebda30ccd8ffb0928b681d6c587d17e86a1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721932856758596451,Labe
ls:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1d742115a4469722e6778f43b7ac8a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6f82e7e3314f2bc6a2375454459faac4324929f66a8011cbb8edb80e465d5a,PodSandboxId:da39d2114245c51e61d4c07bac7a3abc816750b17b253b3a735aa0514a4e6c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:172193285224800
5873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hl75b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8b9483-0286-4adc-a8e2-8be2de41d547,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd61debac04aa26a42e37de147d468a2f423c089b4e3bcd259d763da4c99d1c2,PodSandboxId:174264da35db529ab64c5042a5febdcae947f9e7a7f8ef519f2977dd46eba309,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721932839851023079,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d223bd0f-e8eb-4586-9a22-d4a23d9cb659,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3039941fb78a1057fd657e08a0b6de5c6b962973e95fc09231c25cb1f443951,PodSandboxId:f0e0f2d06affa077bcf58049810c30519cb502d6cc0c786d72506c0e17e8c8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932839615045947,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sws55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d85a384-bad8-4007-a082-fd0a9b2d9893,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904977d06827645b461766b9613f70e3f1bfd8cc17a629ee9a41fb3852fc5900,PodSandboxId:3bab1dd6aa54ed3ac3d8feadc390dabcace6e028c40ee8b56c25e97d68884023,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932839346000600,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gbrmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17800b2d-cae0-4137-a15a-8edd4861b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76aa9a22f7e2df06b449518ff8fdce7480cd275db96a252edddb05d8c04ea0ff,PodSandboxId:2a01630d1aca229dcb07e491a128709272b1cfa9158135ee74c390c29140
731c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721932837246367341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c48d715c88b6d0fbb43ec5eee853b392,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b42e2fd64ce7dda253933bcf5d884163e28a6e62a127c3d3709f76597f8fc701,PodSandboxId:03a9a6ab5aade1fc1af62c6256887fc877017fe136ea48b02342a37e82fa123f,M
etadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721932837182736565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1d742115a4469722e6778f43b7ac8a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a59bbc549ce72bfdc30c68c6a88f36bfc69d06958ff8f9f776df5fbbec4eb7,PodSandboxId:b07c978dc42542fbbc162bd99638e1e7451082e469130
e8ce6c9814c2856c7be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721932837240657633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c8bc07df1ca7050cc93b3835ef46bf,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33daaa061ae694dede503a90c7a7c84b0e525a52ab8840a785c5f1f3c2d1d7d7,PodSandboxId:7896b81d6bc00828e1984a82d4e84df4a658bb49e490b3a08ca
552132bcb82db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721932836955140651,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hl75b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8b9483-0286-4adc-a8e2-8be2de41d547,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd067899ec398bf7742217ec05330dad0e0e7cd243809c4977485ccd6a9b6a0,PodSandboxId:958f4c4a8ca24f4693c68ccfcdeedb0d496db41aeb6ceefe2921cd90841b9a47,Metadata:&ContainerM
etadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721932836862374599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28938e380bab9f01a42b2e07713060cf,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3434ae30-d4b3-4962-8446-576f0e73df02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.889386206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25391d74-38aa-4d9b-bf4a-f1f046873b6c name=/runtime.v1.RuntimeService/Version
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.889483361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25391d74-38aa-4d9b-bf4a-f1f046873b6c name=/runtime.v1.RuntimeService/Version
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.890771980Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92410940-0567-4b48-a312-cd33792899b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.891148861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932863891129704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92410940-0567-4b48-a312-cd33792899b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.895032453Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ba345f2-2d7f-4c4c-8f96-5bde66f9ecc8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.895110333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ba345f2-2d7f-4c4c-8f96-5bde66f9ecc8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.895484473Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8044dc5d21e6e80d20d7fc11609a0eb1fec3dd4a77bfc087da54be6b5d0beac1,PodSandboxId:3bab1dd6aa54ed3ac3d8feadc390dabcace6e028c40ee8b56c25e97d68884023,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932860670884466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gbrmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17800b2d-cae0-4137-a15a-8edd4861b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29da01ce2614d8fbef1a642482143cf59dbedeb7dae25b8a79700dc8f90fe94,PodSandboxId:f0e0f2d06affa077bcf58049810c30519cb502d6cc0c786d72506c0e17e8c8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932860658692248,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sws55,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6d85a384-bad8-4007-a082-fd0a9b2d9893,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff953de54dfbd31948adc3901b9b2381f896668aafd90241337e4a551ebda031,PodSandboxId:174264da35db529ab64c5042a5febdcae947f9e7a7f8ef519f2977dd46eba309,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721932860648846433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d223bd0f-e8eb-4586-9a22-d4a23d9cb659,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd0e6811653e15136a66328b0fb41feab6f12d7c48c3c941831bec7bb6bef9e,PodSandboxId:4663e1ae5e2ca8c825ab40d542094e102406d0dfb1e8032ed57923036c6ca93a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNI
NG,CreatedAt:1721932856814392047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c8bc07df1ca7050cc93b3835ef46bf,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853f77882e1ebd61301433669347f0535b74e855150b6ea2f08dc0871917fecc,PodSandboxId:fd32379ea3fa867eb2f5503451e6f79fe8d98929c657045587810847d0aa8f7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,Cr
eatedAt:1721932856782104283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c48d715c88b6d0fbb43ec5eee853b392,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7332a3834e66b0181becf9638dd3df46a1247850d26f4984ac4254b501f834b6,PodSandboxId:78224e4ffb343002bb690b0d0576b6c5e6ccfb1a1541223adb24fb36f3e5abce,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721932
856802326819,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28938e380bab9f01a42b2e07713060cf,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d330a47a9b25cafdb42c277fecb833c1da1b83c1373693c523366466d78f08d8,PodSandboxId:79f32705de3ca5ec6e927be3f026ebda30ccd8ffb0928b681d6c587d17e86a1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721932856758596451,Labe
ls:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1d742115a4469722e6778f43b7ac8a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6f82e7e3314f2bc6a2375454459faac4324929f66a8011cbb8edb80e465d5a,PodSandboxId:da39d2114245c51e61d4c07bac7a3abc816750b17b253b3a735aa0514a4e6c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:172193285224800
5873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hl75b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8b9483-0286-4adc-a8e2-8be2de41d547,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd61debac04aa26a42e37de147d468a2f423c089b4e3bcd259d763da4c99d1c2,PodSandboxId:174264da35db529ab64c5042a5febdcae947f9e7a7f8ef519f2977dd46eba309,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721932839851023079,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d223bd0f-e8eb-4586-9a22-d4a23d9cb659,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3039941fb78a1057fd657e08a0b6de5c6b962973e95fc09231c25cb1f443951,PodSandboxId:f0e0f2d06affa077bcf58049810c30519cb502d6cc0c786d72506c0e17e8c8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932839615045947,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sws55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d85a384-bad8-4007-a082-fd0a9b2d9893,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904977d06827645b461766b9613f70e3f1bfd8cc17a629ee9a41fb3852fc5900,PodSandboxId:3bab1dd6aa54ed3ac3d8feadc390dabcace6e028c40ee8b56c25e97d68884023,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932839346000600,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gbrmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17800b2d-cae0-4137-a15a-8edd4861b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76aa9a22f7e2df06b449518ff8fdce7480cd275db96a252edddb05d8c04ea0ff,PodSandboxId:2a01630d1aca229dcb07e491a128709272b1cfa9158135ee74c390c29140
731c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721932837246367341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c48d715c88b6d0fbb43ec5eee853b392,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b42e2fd64ce7dda253933bcf5d884163e28a6e62a127c3d3709f76597f8fc701,PodSandboxId:03a9a6ab5aade1fc1af62c6256887fc877017fe136ea48b02342a37e82fa123f,M
etadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721932837182736565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1d742115a4469722e6778f43b7ac8a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a59bbc549ce72bfdc30c68c6a88f36bfc69d06958ff8f9f776df5fbbec4eb7,PodSandboxId:b07c978dc42542fbbc162bd99638e1e7451082e469130
e8ce6c9814c2856c7be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721932837240657633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c8bc07df1ca7050cc93b3835ef46bf,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33daaa061ae694dede503a90c7a7c84b0e525a52ab8840a785c5f1f3c2d1d7d7,PodSandboxId:7896b81d6bc00828e1984a82d4e84df4a658bb49e490b3a08ca
552132bcb82db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721932836955140651,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hl75b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8b9483-0286-4adc-a8e2-8be2de41d547,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd067899ec398bf7742217ec05330dad0e0e7cd243809c4977485ccd6a9b6a0,PodSandboxId:958f4c4a8ca24f4693c68ccfcdeedb0d496db41aeb6ceefe2921cd90841b9a47,Metadata:&ContainerM
etadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721932836862374599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28938e380bab9f01a42b2e07713060cf,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ba345f2-2d7f-4c4c-8f96-5bde66f9ecc8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.936528745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75c3a449-5bdd-459a-8b72-c75f5aa54d2c name=/runtime.v1.RuntimeService/Version
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.936662368Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75c3a449-5bdd-459a-8b72-c75f5aa54d2c name=/runtime.v1.RuntimeService/Version
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.937659650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9839e4fd-7689-4b1c-924a-ed71042bff90 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.938147667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932863938121440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9839e4fd-7689-4b1c-924a-ed71042bff90 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.938651522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1940deca-d5c5-4aec-9506-b470a491add4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.938721371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1940deca-d5c5-4aec-9506-b470a491add4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:41:03 kubernetes-upgrade-069209 crio[2914]: time="2024-07-25 18:41:03.939062130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8044dc5d21e6e80d20d7fc11609a0eb1fec3dd4a77bfc087da54be6b5d0beac1,PodSandboxId:3bab1dd6aa54ed3ac3d8feadc390dabcace6e028c40ee8b56c25e97d68884023,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932860670884466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gbrmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17800b2d-cae0-4137-a15a-8edd4861b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29da01ce2614d8fbef1a642482143cf59dbedeb7dae25b8a79700dc8f90fe94,PodSandboxId:f0e0f2d06affa077bcf58049810c30519cb502d6cc0c786d72506c0e17e8c8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932860658692248,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sws55,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 6d85a384-bad8-4007-a082-fd0a9b2d9893,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff953de54dfbd31948adc3901b9b2381f896668aafd90241337e4a551ebda031,PodSandboxId:174264da35db529ab64c5042a5febdcae947f9e7a7f8ef519f2977dd46eba309,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721932860648846433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d223bd0f-e8eb-4586-9a22-d4a23d9cb659,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd0e6811653e15136a66328b0fb41feab6f12d7c48c3c941831bec7bb6bef9e,PodSandboxId:4663e1ae5e2ca8c825ab40d542094e102406d0dfb1e8032ed57923036c6ca93a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNI
NG,CreatedAt:1721932856814392047,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c8bc07df1ca7050cc93b3835ef46bf,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:853f77882e1ebd61301433669347f0535b74e855150b6ea2f08dc0871917fecc,PodSandboxId:fd32379ea3fa867eb2f5503451e6f79fe8d98929c657045587810847d0aa8f7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,Cr
eatedAt:1721932856782104283,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c48d715c88b6d0fbb43ec5eee853b392,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7332a3834e66b0181becf9638dd3df46a1247850d26f4984ac4254b501f834b6,PodSandboxId:78224e4ffb343002bb690b0d0576b6c5e6ccfb1a1541223adb24fb36f3e5abce,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721932
856802326819,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28938e380bab9f01a42b2e07713060cf,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d330a47a9b25cafdb42c277fecb833c1da1b83c1373693c523366466d78f08d8,PodSandboxId:79f32705de3ca5ec6e927be3f026ebda30ccd8ffb0928b681d6c587d17e86a1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721932856758596451,Labe
ls:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1d742115a4469722e6778f43b7ac8a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6f82e7e3314f2bc6a2375454459faac4324929f66a8011cbb8edb80e465d5a,PodSandboxId:da39d2114245c51e61d4c07bac7a3abc816750b17b253b3a735aa0514a4e6c75,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:172193285224800
5873,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hl75b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8b9483-0286-4adc-a8e2-8be2de41d547,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd61debac04aa26a42e37de147d468a2f423c089b4e3bcd259d763da4c99d1c2,PodSandboxId:174264da35db529ab64c5042a5febdcae947f9e7a7f8ef519f2977dd46eba309,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721932839851023079,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d223bd0f-e8eb-4586-9a22-d4a23d9cb659,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3039941fb78a1057fd657e08a0b6de5c6b962973e95fc09231c25cb1f443951,PodSandboxId:f0e0f2d06affa077bcf58049810c30519cb502d6cc0c786d72506c0e17e8c8f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932839615045947,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sws55,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d85a384-bad8-4007-a082-fd0a9b2d9893,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:904977d06827645b461766b9613f70e3f1bfd8cc17a629ee9a41fb3852fc5900,PodSandboxId:3bab1dd6aa54ed3ac3d8feadc390dabcace6e028c40ee8b56c25e97d68884023,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932839346000600,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gbrmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17800b2d-cae0-4137-a15a-8edd4861b3a6,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76aa9a22f7e2df06b449518ff8fdce7480cd275db96a252edddb05d8c04ea0ff,PodSandboxId:2a01630d1aca229dcb07e491a128709272b1cfa9158135ee74c390c29140
731c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721932837246367341,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c48d715c88b6d0fbb43ec5eee853b392,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b42e2fd64ce7dda253933bcf5d884163e28a6e62a127c3d3709f76597f8fc701,PodSandboxId:03a9a6ab5aade1fc1af62c6256887fc877017fe136ea48b02342a37e82fa123f,M
etadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721932837182736565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1d742115a4469722e6778f43b7ac8a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09a59bbc549ce72bfdc30c68c6a88f36bfc69d06958ff8f9f776df5fbbec4eb7,PodSandboxId:b07c978dc42542fbbc162bd99638e1e7451082e469130
e8ce6c9814c2856c7be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721932837240657633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13c8bc07df1ca7050cc93b3835ef46bf,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33daaa061ae694dede503a90c7a7c84b0e525a52ab8840a785c5f1f3c2d1d7d7,PodSandboxId:7896b81d6bc00828e1984a82d4e84df4a658bb49e490b3a08ca
552132bcb82db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721932836955140651,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hl75b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f8b9483-0286-4adc-a8e2-8be2de41d547,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddd067899ec398bf7742217ec05330dad0e0e7cd243809c4977485ccd6a9b6a0,PodSandboxId:958f4c4a8ca24f4693c68ccfcdeedb0d496db41aeb6ceefe2921cd90841b9a47,Metadata:&ContainerM
etadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721932836862374599,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-069209,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28938e380bab9f01a42b2e07713060cf,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1940deca-d5c5-4aec-9506-b470a491add4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8044dc5d21e6e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   3bab1dd6aa54e       coredns-5cfdc65f69-gbrmq
	b29da01ce2614       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   f0e0f2d06affa       coredns-5cfdc65f69-sws55
	ff953de54dfbd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   174264da35db5       storage-provisioner
	7dd0e6811653e       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            2                   4663e1ae5e2ca       kube-apiserver-kubernetes-upgrade-069209
	7332a3834e66b       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago       Running             etcd                      2                   78224e4ffb343       etcd-kubernetes-upgrade-069209
	853f77882e1eb       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago       Running             kube-scheduler            2                   fd32379ea3fa8       kube-scheduler-kubernetes-upgrade-069209
	d330a47a9b25c       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   2                   79f32705de3ca       kube-controller-manager-kubernetes-upgrade-069209
	bc6f82e7e3314       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   11 seconds ago      Running             kube-proxy                2                   da39d2114245c       kube-proxy-hl75b
	cd61debac04aa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   24 seconds ago      Exited              storage-provisioner       2                   174264da35db5       storage-provisioner
	b3039941fb78a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   f0e0f2d06affa       coredns-5cfdc65f69-sws55
	904977d068276       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   3bab1dd6aa54e       coredns-5cfdc65f69-gbrmq
	76aa9a22f7e2d       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   26 seconds ago      Exited              kube-scheduler            1                   2a01630d1aca2       kube-scheduler-kubernetes-upgrade-069209
	09a59bbc549ce       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   26 seconds ago      Exited              kube-apiserver            1                   b07c978dc4254       kube-apiserver-kubernetes-upgrade-069209
	b42e2fd64ce7d       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   26 seconds ago      Exited              kube-controller-manager   1                   03a9a6ab5aade       kube-controller-manager-kubernetes-upgrade-069209
	33daaa061ae69       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   27 seconds ago      Exited              kube-proxy                1                   7896b81d6bc00       kube-proxy-hl75b
	ddd067899ec39       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   27 seconds ago      Exited              etcd                      1                   958f4c4a8ca24       etcd-kubernetes-upgrade-069209
	
	
	==> coredns [8044dc5d21e6e80d20d7fc11609a0eb1fec3dd4a77bfc087da54be6b5d0beac1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [904977d06827645b461766b9613f70e3f1bfd8cc17a629ee9a41fb3852fc5900] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b29da01ce2614d8fbef1a642482143cf59dbedeb7dae25b8a79700dc8f90fe94] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b3039941fb78a1057fd657e08a0b6de5c6b962973e95fc09231c25cb1f443951] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-069209
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-069209
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:39:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-069209
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:40:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 18:41:00 +0000   Thu, 25 Jul 2024 18:39:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 18:41:00 +0000   Thu, 25 Jul 2024 18:39:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 18:41:00 +0000   Thu, 25 Jul 2024 18:39:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 18:41:00 +0000   Thu, 25 Jul 2024 18:39:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.165
	  Hostname:    kubernetes-upgrade-069209
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb3659d3f8424a639898d5ee1ccb9eda
	  System UUID:                fb3659d3-f842-4a63-9898-d5ee1ccb9eda
	  Boot ID:                    15626f02-945a-42aa-bc25-0b0b91d3ea1e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-gbrmq                             100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     67s
	  kube-system                 coredns-5cfdc65f69-sws55                             100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     67s
	  kube-system                 etcd-kubernetes-upgrade-069209                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         71s
	  kube-system                 kube-apiserver-kubernetes-upgrade-069209             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         71s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-069209    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         69s
	  kube-system                 kube-proxy-hl75b                                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         67s
	  kube-system                 kube-scheduler-kubernetes-upgrade-069209             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         65s
	  kube-system                 storage-provisioner                                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         69s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             240Mi (11%!)(MISSING)  340Mi (16%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 81s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s (x8 over 81s)  kubelet          Node kubernetes-upgrade-069209 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 81s)  kubelet          Node kubernetes-upgrade-069209 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 81s)  kubelet          Node kubernetes-upgrade-069209 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           67s                node-controller  Node kubernetes-upgrade-069209 event: Registered Node kubernetes-upgrade-069209 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-069209 event: Registered Node kubernetes-upgrade-069209 in Controller
	
	
	==> dmesg <==
	[  +1.550125] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.513298] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.056705] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062420] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.164125] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.176108] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.381523] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +3.972148] systemd-fstab-generator[737]: Ignoring "noauto" option for root device
	[  +1.715500] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.062716] kauditd_printk_skb: 158 callbacks suppressed
	[ +12.112687] systemd-fstab-generator[1252]: Ignoring "noauto" option for root device
	[  +0.077064] kauditd_printk_skb: 69 callbacks suppressed
	[Jul25 18:40] kauditd_printk_skb: 104 callbacks suppressed
	[  +7.348589] systemd-fstab-generator[2265]: Ignoring "noauto" option for root device
	[  +0.149664] systemd-fstab-generator[2277]: Ignoring "noauto" option for root device
	[  +0.176988] systemd-fstab-generator[2291]: Ignoring "noauto" option for root device
	[  +0.171131] systemd-fstab-generator[2303]: Ignoring "noauto" option for root device
	[  +0.933498] systemd-fstab-generator[2739]: Ignoring "noauto" option for root device
	[  +1.271237] systemd-fstab-generator[3159]: Ignoring "noauto" option for root device
	[  +2.445249] kauditd_printk_skb: 291 callbacks suppressed
	[ +15.116267] systemd-fstab-generator[4004]: Ignoring "noauto" option for root device
	[Jul25 18:41] systemd-fstab-generator[4446]: Ignoring "noauto" option for root device
	[  +0.096481] kauditd_printk_skb: 52 callbacks suppressed
	
	
	==> etcd [7332a3834e66b0181becf9638dd3df46a1247850d26f4984ac4254b501f834b6] <==
	{"level":"info","ts":"2024-07-25T18:40:57.162283Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.165:2380"}
	{"level":"info","ts":"2024-07-25T18:40:57.162513Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9a51797c3140749b","initial-advertise-peer-urls":["https://192.168.50.165:2380"],"listen-peer-urls":["https://192.168.50.165:2380"],"advertise-client-urls":["https://192.168.50.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:40:57.162529Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:40:57.163745Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T18:40:57.164052Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T18:40:57.163851Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4efceb46dfe38217","local-member-id":"9a51797c3140749b","added-peer-id":"9a51797c3140749b","added-peer-peer-urls":["https://192.168.50.165:2380"]}
	{"level":"info","ts":"2024-07-25T18:40:57.1644Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4efceb46dfe38217","local-member-id":"9a51797c3140749b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:40:57.164485Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:40:57.163982Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.165:2380"}
	{"level":"info","ts":"2024-07-25T18:40:58.443941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9a51797c3140749b is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-25T18:40:58.443995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9a51797c3140749b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-25T18:40:58.444034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9a51797c3140749b received MsgPreVoteResp from 9a51797c3140749b at term 2"}
	{"level":"info","ts":"2024-07-25T18:40:58.444049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9a51797c3140749b became candidate at term 3"}
	{"level":"info","ts":"2024-07-25T18:40:58.44406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9a51797c3140749b received MsgVoteResp from 9a51797c3140749b at term 3"}
	{"level":"info","ts":"2024-07-25T18:40:58.444068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9a51797c3140749b became leader at term 3"}
	{"level":"info","ts":"2024-07-25T18:40:58.444076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9a51797c3140749b elected leader 9a51797c3140749b at term 3"}
	{"level":"info","ts":"2024-07-25T18:40:58.451376Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9a51797c3140749b","local-member-attributes":"{Name:kubernetes-upgrade-069209 ClientURLs:[https://192.168.50.165:2379]}","request-path":"/0/members/9a51797c3140749b/attributes","cluster-id":"4efceb46dfe38217","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:40:58.451441Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:40:58.452023Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:40:58.452119Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:40:58.452159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:40:58.453098Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-25T18:40:58.453154Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-25T18:40:58.455658Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:40:58.456371Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.165:2379"}
	
	
	==> etcd [ddd067899ec398bf7742217ec05330dad0e0e7cd243809c4977485ccd6a9b6a0] <==
	{"level":"info","ts":"2024-07-25T18:40:37.26551Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-25T18:40:37.284147Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"4efceb46dfe38217","local-member-id":"9a51797c3140749b","commit-index":417}
	{"level":"info","ts":"2024-07-25T18:40:37.28435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9a51797c3140749b switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-25T18:40:37.284428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9a51797c3140749b became follower at term 2"}
	{"level":"info","ts":"2024-07-25T18:40:37.284445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9a51797c3140749b [peers: [], term: 2, commit: 417, applied: 0, lastindex: 417, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-25T18:40:37.292657Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-25T18:40:37.30795Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":403}
	{"level":"info","ts":"2024-07-25T18:40:37.329685Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-25T18:40:37.332732Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"9a51797c3140749b","timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:40:37.332986Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"9a51797c3140749b"}
	{"level":"info","ts":"2024-07-25T18:40:37.333018Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"9a51797c3140749b","local-server-version":"3.5.14","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-25T18:40:37.333506Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-25T18:40:37.333707Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-25T18:40:37.333844Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T18:40:37.333917Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T18:40:37.33393Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T18:40:37.334161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9a51797c3140749b switched to configuration voters=(11119802529263678619)"}
	{"level":"info","ts":"2024-07-25T18:40:37.334205Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4efceb46dfe38217","local-member-id":"9a51797c3140749b","added-peer-id":"9a51797c3140749b","added-peer-peer-urls":["https://192.168.50.165:2380"]}
	{"level":"info","ts":"2024-07-25T18:40:37.334281Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4efceb46dfe38217","local-member-id":"9a51797c3140749b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:40:37.334306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:40:37.344187Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-25T18:40:37.344343Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9a51797c3140749b","initial-advertise-peer-urls":["https://192.168.50.165:2380"],"listen-peer-urls":["https://192.168.50.165:2380"],"advertise-client-urls":["https://192.168.50.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:40:37.344363Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:40:37.344507Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.165:2380"}
	{"level":"info","ts":"2024-07-25T18:40:37.344514Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.165:2380"}
	
	
	==> kernel <==
	 18:41:04 up 1 min,  0 users,  load average: 1.08, 0.40, 0.15
	Linux kubernetes-upgrade-069209 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [09a59bbc549ce72bfdc30c68c6a88f36bfc69d06958ff8f9f776df5fbbec4eb7] <==
	
	
	==> kube-apiserver [7dd0e6811653e15136a66328b0fb41feab6f12d7c48c3c941831bec7bb6bef9e] <==
	I0725 18:40:59.905077       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0725 18:41:00.008017       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0725 18:41:00.014032       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0725 18:41:00.014069       1 policy_source.go:224] refreshing policies
	I0725 18:41:00.018294       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 18:41:00.050643       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 18:41:00.051060       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0725 18:41:00.051200       1 aggregator.go:171] initial CRD sync complete...
	I0725 18:41:00.051228       1 autoregister_controller.go:144] Starting autoregister controller
	I0725 18:41:00.051236       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0725 18:41:00.051242       1 cache.go:39] Caches are synced for autoregister controller
	I0725 18:41:00.057305       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 18:41:00.059888       1 shared_informer.go:320] Caches are synced for configmaps
	I0725 18:41:00.059946       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0725 18:41:00.059978       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0725 18:41:00.059984       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0725 18:41:00.071594       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0725 18:41:00.912231       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 18:41:01.600733       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0725 18:41:01.613705       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0725 18:41:01.660091       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0725 18:41:01.779373       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 18:41:01.786175       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 18:41:02.810404       1 controller.go:615] quota admission added evaluator for: endpoints
	I0725 18:41:04.458317       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b42e2fd64ce7dda253933bcf5d884163e28a6e62a127c3d3709f76597f8fc701] <==
	
	
	==> kube-controller-manager [d330a47a9b25cafdb42c277fecb833c1da1b83c1373693c523366466d78f08d8] <==
	I0725 18:41:03.808227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="624.126µs"
	I0725 18:41:03.845623       1 shared_informer.go:320] Caches are synced for deployment
	I0725 18:41:04.013128       1 shared_informer.go:320] Caches are synced for stateful set
	I0725 18:41:04.013213       1 shared_informer.go:320] Caches are synced for PVC protection
	I0725 18:41:04.088671       1 shared_informer.go:320] Caches are synced for expand
	I0725 18:41:04.095137       1 shared_informer.go:320] Caches are synced for service account
	I0725 18:41:04.098469       1 shared_informer.go:320] Caches are synced for ephemeral
	I0725 18:41:04.105931       1 shared_informer.go:320] Caches are synced for namespace
	I0725 18:41:04.249167       1 shared_informer.go:320] Caches are synced for endpoint
	I0725 18:41:04.346138       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0725 18:41:04.346173       1 shared_informer.go:320] Caches are synced for crt configmap
	I0725 18:41:04.350805       1 shared_informer.go:320] Caches are synced for cronjob
	I0725 18:41:04.386688       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0725 18:41:04.402532       1 shared_informer.go:320] Caches are synced for disruption
	I0725 18:41:04.446733       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0725 18:41:04.446821       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-069209"
	I0725 18:41:04.478295       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 18:41:04.495624       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0725 18:41:04.495836       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 18:41:04.495865       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0725 18:41:04.497523       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 18:41:04.497639       1 shared_informer.go:320] Caches are synced for attach detach
	I0725 18:41:04.503184       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 18:41:04.503961       1 shared_informer.go:320] Caches are synced for persistent volume
	I0725 18:41:04.517528       1 shared_informer.go:320] Caches are synced for PV protection
	
	
	==> kube-proxy [33daaa061ae694dede503a90c7a7c84b0e525a52ab8840a785c5f1f3c2d1d7d7] <==
	
	
	==> kube-proxy [bc6f82e7e3314f2bc6a2375454459faac4324929f66a8011cbb8edb80e465d5a] <==
	E0725 18:40:52.393154       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0725 18:40:52.394966       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-069209\": dial tcp 192.168.50.165:8443: connect: connection refused"
	E0725 18:40:53.404630       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-069209\": dial tcp 192.168.50.165:8443: connect: connection refused"
	E0725 18:40:55.504347       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-069209\": dial tcp 192.168.50.165:8443: connect: connection refused"
	I0725 18:41:00.026514       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.165"]
	E0725 18:41:00.027828       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0725 18:41:00.105649       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0725 18:41:00.105815       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:41:00.105870       1 server_linux.go:170] "Using iptables Proxier"
	I0725 18:41:00.109002       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0725 18:41:00.109423       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0725 18:41:00.109449       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:41:00.114118       1 config.go:197] "Starting service config controller"
	I0725 18:41:00.114148       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:41:00.114244       1 config.go:104] "Starting endpoint slice config controller"
	I0725 18:41:00.114315       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:41:00.115712       1 config.go:326] "Starting node config controller"
	I0725 18:41:00.115750       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:41:00.215315       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:41:00.215452       1 shared_informer.go:320] Caches are synced for service config
	I0725 18:41:00.216044       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [76aa9a22f7e2df06b449518ff8fdce7480cd275db96a252edddb05d8c04ea0ff] <==
	
	
	==> kube-scheduler [853f77882e1ebd61301433669347f0535b74e855150b6ea2f08dc0871917fecc] <==
	I0725 18:40:58.760512       1 serving.go:386] Generated self-signed cert in-memory
	I0725 18:41:00.059670       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0725 18:41:00.059807       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:41:00.065106       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 18:41:00.065221       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0725 18:41:00.065253       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0725 18:41:00.065304       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0725 18:41:00.068465       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:41:00.068535       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:41:00.068625       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0725 18:41:00.068649       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 18:41:00.165722       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0725 18:41:00.168990       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 18:41:00.169140       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 18:40:56 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:40:56.558914    4011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e1d742115a4469722e6778f43b7ac8a-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-069209\" (UID: \"0e1d742115a4469722e6778f43b7ac8a\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-069209"
	Jul 25 18:40:56 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:40:56.559147    4011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c48d715c88b6d0fbb43ec5eee853b392-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-069209\" (UID: \"c48d715c88b6d0fbb43ec5eee853b392\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-069209"
	Jul 25 18:40:56 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:40:56.639322    4011 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-069209"
	Jul 25 18:40:56 kubernetes-upgrade-069209 kubelet[4011]: E0725 18:40:56.640332    4011 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.165:8443: connect: connection refused" node="kubernetes-upgrade-069209"
	Jul 25 18:40:56 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:40:56.746677    4011 scope.go:117] "RemoveContainer" containerID="b42e2fd64ce7dda253933bcf5d884163e28a6e62a127c3d3709f76597f8fc701"
	Jul 25 18:40:56 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:40:56.756234    4011 scope.go:117] "RemoveContainer" containerID="76aa9a22f7e2df06b449518ff8fdce7480cd275db96a252edddb05d8c04ea0ff"
	Jul 25 18:40:56 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:40:56.763872    4011 scope.go:117] "RemoveContainer" containerID="09a59bbc549ce72bfdc30c68c6a88f36bfc69d06958ff8f9f776df5fbbec4eb7"
	Jul 25 18:40:56 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:40:56.764346    4011 scope.go:117] "RemoveContainer" containerID="ddd067899ec398bf7742217ec05330dad0e0e7cd243809c4977485ccd6a9b6a0"
	Jul 25 18:40:56 kubernetes-upgrade-069209 kubelet[4011]: E0725 18:40:56.939737    4011 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-069209?timeout=10s\": dial tcp 192.168.50.165:8443: connect: connection refused" interval="800ms"
	Jul 25 18:40:57 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:40:57.042513    4011 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-069209"
	Jul 25 18:40:57 kubernetes-upgrade-069209 kubelet[4011]: E0725 18:40:57.045195    4011 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.165:8443: connect: connection refused" node="kubernetes-upgrade-069209"
	Jul 25 18:40:57 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:40:57.847100    4011 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-069209"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.089886    4011 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-069209"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.090286    4011 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-069209"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.090355    4011 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.091437    4011 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.321423    4011 apiserver.go:52] "Watching apiserver"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.339059    4011 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.393584    4011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d223bd0f-e8eb-4586-9a22-d4a23d9cb659-tmp\") pod \"storage-provisioner\" (UID: \"d223bd0f-e8eb-4586-9a22-d4a23d9cb659\") " pod="kube-system/storage-provisioner"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.394067    4011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f8b9483-0286-4adc-a8e2-8be2de41d547-xtables-lock\") pod \"kube-proxy-hl75b\" (UID: \"5f8b9483-0286-4adc-a8e2-8be2de41d547\") " pod="kube-system/kube-proxy-hl75b"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.394116    4011 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f8b9483-0286-4adc-a8e2-8be2de41d547-lib-modules\") pod \"kube-proxy-hl75b\" (UID: \"5f8b9483-0286-4adc-a8e2-8be2de41d547\") " pod="kube-system/kube-proxy-hl75b"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: E0725 18:41:00.525459    4011 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-069209\" already exists" pod="kube-system/etcd-kubernetes-upgrade-069209"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.627757    4011 scope.go:117] "RemoveContainer" containerID="cd61debac04aa26a42e37de147d468a2f423c089b4e3bcd259d763da4c99d1c2"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.629392    4011 scope.go:117] "RemoveContainer" containerID="904977d06827645b461766b9613f70e3f1bfd8cc17a629ee9a41fb3852fc5900"
	Jul 25 18:41:00 kubernetes-upgrade-069209 kubelet[4011]: I0725 18:41:00.630202    4011 scope.go:117] "RemoveContainer" containerID="b3039941fb78a1057fd657e08a0b6de5c6b962973e95fc09231c25cb1f443951"
	
	
	==> storage-provisioner [cd61debac04aa26a42e37de147d468a2f423c089b4e3bcd259d763da4c99d1c2] <==
	I0725 18:40:39.969344       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0725 18:40:39.971317       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [ff953de54dfbd31948adc3901b9b2381f896668aafd90241337e4a551ebda031] <==
	I0725 18:41:00.803812       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 18:41:00.818371       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 18:41:00.818450       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:41:03.382852   56650 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19326-5877/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
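The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.ErrTooLong: a Scanner's default per-line limit is 64 KiB (bufio.MaxScanTokenSize), and lastStart.txt evidently contains a longer line. As a minimal sketch of the mechanism (not minikube's actual code; the file path is purely illustrative), raising the limit with Scanner.Buffer avoids the error:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path only
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); any longer line
		// makes Scan() stop with bufio.ErrTooLong ("token too long").
		// Allow lines up to 10 MiB instead:
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}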
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-069209 -n kubernetes-upgrade-069209
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-069209 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-069209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-069209
--- FAIL: TestKubernetesUpgrade (447.14s)
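
Note on the "bufio.Scanner: token too long" message in the stderr block above: that is Go's bufio.ErrTooLong, returned when a single line in lastStart.txt exceeds the scanner's maximum token size (bufio.MaxScanTokenSize, 64 KiB by default), which is why the harness could not echo the last start log. The sketch below is a minimal, hypothetical Go example (not the minikube code itself) of reading such a file with an enlarged scanner buffer; the readLongLines helper, the "lastStart.txt" path, and the 1 MiB cap are illustrative assumptions.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLongLines scans a log file line by line, raising the scanner's
	// maximum token size so very long lines do not trigger
	// "bufio.Scanner: token too long" (bufio.ErrTooLong).
	// The 1 MiB limit is an illustrative choice, not minikube's setting.
	func readLongLines(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is bufio.MaxScanTokenSize (64 KiB);
		// allow lines up to 1 MiB instead. Buffer must be set before Scan.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}

	func main() {
		lines, err := readLongLines("lastStart.txt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, "read failed:", err)
			os.Exit(1)
		}
		fmt.Printf("read %d lines\n", len(lines))
	}

Raising the limit simply trades a little memory for the ability to handle very long single-line entries, such as the serialized cluster config lines visible elsewhere in this report.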

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (66.87s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-669817 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0725 18:36:58.590456   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-669817 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.134639859s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-669817] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-669817" primary control-plane node in "pause-669817" cluster
	* Updating the running kvm2 "pause-669817" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-669817" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:36:54.249664   50912 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:36:54.249959   50912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:36:54.249972   50912 out.go:304] Setting ErrFile to fd 2...
	I0725 18:36:54.249978   50912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:36:54.250288   50912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:36:54.251004   50912 out.go:298] Setting JSON to false
	I0725 18:36:54.252222   50912 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4758,"bootTime":1721927856,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:36:54.252301   50912 start.go:139] virtualization: kvm guest
	I0725 18:36:54.254585   50912 out.go:177] * [pause-669817] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:36:54.255911   50912 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:36:54.255979   50912 notify.go:220] Checking for updates...
	I0725 18:36:54.258363   50912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:36:54.259629   50912 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:36:54.260849   50912 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:36:54.262043   50912 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:36:54.263261   50912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:36:54.265109   50912 config.go:182] Loaded profile config "pause-669817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:36:54.265577   50912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:36:54.265645   50912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:36:54.284986   50912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
	I0725 18:36:54.285389   50912 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:36:54.285950   50912 main.go:141] libmachine: Using API Version  1
	I0725 18:36:54.285965   50912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:36:54.286354   50912 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:36:54.286543   50912 main.go:141] libmachine: (pause-669817) Calling .DriverName
	I0725 18:36:54.286828   50912 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:36:54.287099   50912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:36:54.287134   50912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:36:54.302438   50912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I0725 18:36:54.302794   50912 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:36:54.303260   50912 main.go:141] libmachine: Using API Version  1
	I0725 18:36:54.303293   50912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:36:54.303694   50912 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:36:54.303928   50912 main.go:141] libmachine: (pause-669817) Calling .DriverName
	I0725 18:36:54.340088   50912 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:36:54.341443   50912 start.go:297] selected driver: kvm2
	I0725 18:36:54.341462   50912 start.go:901] validating driver "kvm2" against &{Name:pause-669817 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-669817 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:36:54.341606   50912 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:36:54.341942   50912 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:36:54.342006   50912 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:36:54.357817   50912 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:36:54.358835   50912 cni.go:84] Creating CNI manager for ""
	I0725 18:36:54.358855   50912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:36:54.358934   50912 start.go:340] cluster config:
	{Name:pause-669817 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-669817 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:36:54.359160   50912 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:36:54.360988   50912 out.go:177] * Starting "pause-669817" primary control-plane node in "pause-669817" cluster
	I0725 18:36:54.362229   50912 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:36:54.362281   50912 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 18:36:54.362293   50912 cache.go:56] Caching tarball of preloaded images
	I0725 18:36:54.362374   50912 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:36:54.362388   50912 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 18:36:54.362543   50912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/pause-669817/config.json ...
	I0725 18:36:54.362772   50912 start.go:360] acquireMachinesLock for pause-669817: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:36:58.516678   50912 start.go:364] duration metric: took 4.153846341s to acquireMachinesLock for "pause-669817"
	I0725 18:36:58.516736   50912 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:36:58.516745   50912 fix.go:54] fixHost starting: 
	I0725 18:36:58.517195   50912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:36:58.517254   50912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:36:58.536389   50912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
	I0725 18:36:58.536916   50912 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:36:58.537518   50912 main.go:141] libmachine: Using API Version  1
	I0725 18:36:58.537559   50912 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:36:58.537872   50912 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:36:58.538066   50912 main.go:141] libmachine: (pause-669817) Calling .DriverName
	I0725 18:36:58.538207   50912 main.go:141] libmachine: (pause-669817) Calling .GetState
	I0725 18:36:58.539894   50912 fix.go:112] recreateIfNeeded on pause-669817: state=Running err=<nil>
	W0725 18:36:58.539915   50912 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:36:58.542079   50912 out.go:177] * Updating the running kvm2 "pause-669817" VM ...
	I0725 18:36:58.543249   50912 machine.go:94] provisionDockerMachine start ...
	I0725 18:36:58.543282   50912 main.go:141] libmachine: (pause-669817) Calling .DriverName
	I0725 18:36:58.543469   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHHostname
	I0725 18:36:58.545781   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:58.546200   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:36:58.546226   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:58.546342   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHPort
	I0725 18:36:58.546497   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:36:58.546674   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:36:58.546784   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHUsername
	I0725 18:36:58.546937   50912 main.go:141] libmachine: Using SSH client type: native
	I0725 18:36:58.547124   50912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0725 18:36:58.547136   50912 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:36:58.660736   50912 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-669817
	
	I0725 18:36:58.660763   50912 main.go:141] libmachine: (pause-669817) Calling .GetMachineName
	I0725 18:36:58.661038   50912 buildroot.go:166] provisioning hostname "pause-669817"
	I0725 18:36:58.661069   50912 main.go:141] libmachine: (pause-669817) Calling .GetMachineName
	I0725 18:36:58.661256   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHHostname
	I0725 18:36:58.663984   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:58.664408   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:36:58.664439   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:58.664572   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHPort
	I0725 18:36:58.664774   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:36:58.664930   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:36:58.665036   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHUsername
	I0725 18:36:58.665185   50912 main.go:141] libmachine: Using SSH client type: native
	I0725 18:36:58.665343   50912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0725 18:36:58.665355   50912 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-669817 && echo "pause-669817" | sudo tee /etc/hostname
	I0725 18:36:58.785945   50912 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-669817
	
	I0725 18:36:58.785982   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHHostname
	I0725 18:36:58.788989   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:58.789381   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:36:58.789404   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:58.789612   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHPort
	I0725 18:36:58.789789   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:36:58.789953   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:36:58.790113   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHUsername
	I0725 18:36:58.790287   50912 main.go:141] libmachine: Using SSH client type: native
	I0725 18:36:58.790493   50912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0725 18:36:58.790511   50912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-669817' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-669817/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-669817' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:36:58.900974   50912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:36:58.901005   50912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:36:58.901026   50912 buildroot.go:174] setting up certificates
	I0725 18:36:58.901035   50912 provision.go:84] configureAuth start
	I0725 18:36:58.901043   50912 main.go:141] libmachine: (pause-669817) Calling .GetMachineName
	I0725 18:36:58.901341   50912 main.go:141] libmachine: (pause-669817) Calling .GetIP
	I0725 18:36:58.904082   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:58.904472   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:36:58.904500   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:58.904699   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHHostname
	I0725 18:36:58.906881   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:58.907260   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:36:58.907288   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:58.907439   50912 provision.go:143] copyHostCerts
	I0725 18:36:58.907496   50912 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:36:58.907506   50912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:36:58.907560   50912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:36:58.907692   50912 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:36:58.907701   50912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:36:58.907721   50912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:36:58.907795   50912 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:36:58.907801   50912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:36:58.907819   50912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:36:58.907873   50912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.pause-669817 san=[127.0.0.1 192.168.61.203 localhost minikube pause-669817]
	I0725 18:36:59.262786   50912 provision.go:177] copyRemoteCerts
	I0725 18:36:59.262846   50912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:36:59.262868   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHHostname
	I0725 18:36:59.265629   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:59.266092   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:36:59.266121   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:59.266329   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHPort
	I0725 18:36:59.266512   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:36:59.266702   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHUsername
	I0725 18:36:59.266847   50912 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/pause-669817/id_rsa Username:docker}
	I0725 18:36:59.350330   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0725 18:36:59.373461   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:36:59.398634   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:36:59.421837   50912 provision.go:87] duration metric: took 520.792376ms to configureAuth
	I0725 18:36:59.421861   50912 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:36:59.422090   50912 config.go:182] Loaded profile config "pause-669817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:36:59.422182   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHHostname
	I0725 18:36:59.425214   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:59.425667   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:36:59.425698   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:36:59.425866   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHPort
	I0725 18:36:59.426055   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:36:59.426316   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:36:59.426473   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHUsername
	I0725 18:36:59.426650   50912 main.go:141] libmachine: Using SSH client type: native
	I0725 18:36:59.426819   50912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0725 18:36:59.426835   50912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:37:05.074958   50912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:37:05.074984   50912 machine.go:97] duration metric: took 6.531709351s to provisionDockerMachine
	I0725 18:37:05.074997   50912 start.go:293] postStartSetup for "pause-669817" (driver="kvm2")
	I0725 18:37:05.075011   50912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:37:05.075032   50912 main.go:141] libmachine: (pause-669817) Calling .DriverName
	I0725 18:37:05.075457   50912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:37:05.075493   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHHostname
	I0725 18:37:05.078647   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:05.079073   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:37:05.079096   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:05.079273   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHPort
	I0725 18:37:05.079491   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:37:05.079681   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHUsername
	I0725 18:37:05.079848   50912 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/pause-669817/id_rsa Username:docker}
	I0725 18:37:05.177759   50912 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:37:05.183213   50912 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:37:05.183243   50912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:37:05.183324   50912 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:37:05.183448   50912 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:37:05.183590   50912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:37:05.195323   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:37:05.227674   50912 start.go:296] duration metric: took 152.660982ms for postStartSetup
	I0725 18:37:05.227717   50912 fix.go:56] duration metric: took 6.710972537s for fixHost
	I0725 18:37:05.227742   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHHostname
	I0725 18:37:05.230404   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:05.230798   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:37:05.230825   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:05.231165   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHPort
	I0725 18:37:05.231396   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:37:05.231588   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:37:05.231877   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHUsername
	I0725 18:37:05.232063   50912 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:05.232231   50912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.203 22 <nil> <nil>}
	I0725 18:37:05.232244   50912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 18:37:05.349996   50912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721932625.334581238
	
	I0725 18:37:05.350025   50912 fix.go:216] guest clock: 1721932625.334581238
	I0725 18:37:05.350035   50912 fix.go:229] Guest: 2024-07-25 18:37:05.334581238 +0000 UTC Remote: 2024-07-25 18:37:05.227722099 +0000 UTC m=+11.019851573 (delta=106.859139ms)
	I0725 18:37:05.350084   50912 fix.go:200] guest clock delta is within tolerance: 106.859139ms
	I0725 18:37:05.350092   50912 start.go:83] releasing machines lock for "pause-669817", held for 6.833376867s
	I0725 18:37:05.350118   50912 main.go:141] libmachine: (pause-669817) Calling .DriverName
	I0725 18:37:05.350436   50912 main.go:141] libmachine: (pause-669817) Calling .GetIP
	I0725 18:37:05.353628   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:05.354124   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:37:05.354158   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:05.354417   50912 main.go:141] libmachine: (pause-669817) Calling .DriverName
	I0725 18:37:05.354980   50912 main.go:141] libmachine: (pause-669817) Calling .DriverName
	I0725 18:37:05.355158   50912 main.go:141] libmachine: (pause-669817) Calling .DriverName
	I0725 18:37:05.355246   50912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:37:05.355295   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHHostname
	I0725 18:37:05.355423   50912 ssh_runner.go:195] Run: cat /version.json
	I0725 18:37:05.355461   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHHostname
	I0725 18:37:05.358132   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:05.358429   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:05.358548   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:37:05.358627   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:05.358787   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHPort
	I0725 18:37:05.358931   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:37:05.359089   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHUsername
	I0725 18:37:05.359242   50912 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/pause-669817/id_rsa Username:docker}
	I0725 18:37:05.359556   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHPort
	I0725 18:37:05.359124   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:37:05.359714   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:05.359731   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHKeyPath
	I0725 18:37:05.359876   50912 main.go:141] libmachine: (pause-669817) Calling .GetSSHUsername
	I0725 18:37:05.360018   50912 sshutil.go:53] new ssh client: &{IP:192.168.61.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/pause-669817/id_rsa Username:docker}
	I0725 18:37:05.480774   50912 ssh_runner.go:195] Run: systemctl --version
	I0725 18:37:05.487108   50912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:37:05.647870   50912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:37:05.655888   50912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:37:05.655957   50912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:37:05.666327   50912 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0725 18:37:05.666375   50912 start.go:495] detecting cgroup driver to use...
	I0725 18:37:05.666514   50912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:37:05.683160   50912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:37:05.696440   50912 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:37:05.696540   50912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:37:05.711489   50912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:37:05.725877   50912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:37:05.886537   50912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:37:06.047070   50912 docker.go:233] disabling docker service ...
	I0725 18:37:06.047174   50912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:37:06.068781   50912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:37:06.083534   50912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:37:06.239830   50912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:37:06.368941   50912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:37:06.386857   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:37:06.410278   50912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:37:06.410349   50912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:06.421501   50912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:37:06.421559   50912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:06.432161   50912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:06.443400   50912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:06.454195   50912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:37:06.465116   50912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:06.478822   50912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:06.499225   50912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:06.513430   50912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:37:06.525962   50912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:37:06.538492   50912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:37:06.689710   50912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:37:09.546888   50912 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.857086499s)
	I0725 18:37:09.546922   50912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:37:09.546997   50912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:37:09.559074   50912 start.go:563] Will wait 60s for crictl version
	I0725 18:37:09.559146   50912 ssh_runner.go:195] Run: which crictl
	I0725 18:37:09.575010   50912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:37:09.645841   50912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:37:09.645951   50912 ssh_runner.go:195] Run: crio --version
	I0725 18:37:09.704635   50912 ssh_runner.go:195] Run: crio --version
	I0725 18:37:09.829903   50912 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:37:09.918737   50912 main.go:141] libmachine: (pause-669817) Calling .GetIP
	I0725 18:37:09.922090   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:09.922437   50912 main.go:141] libmachine: (pause-669817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:09:31", ip: ""} in network mk-pause-669817: {Iface:virbr1 ExpiryTime:2024-07-25 19:35:29 +0000 UTC Type:0 Mac:52:54:00:7b:09:31 Iaid: IPaddr:192.168.61.203 Prefix:24 Hostname:pause-669817 Clientid:01:52:54:00:7b:09:31}
	I0725 18:37:09.922465   50912 main.go:141] libmachine: (pause-669817) DBG | domain pause-669817 has defined IP address 192.168.61.203 and MAC address 52:54:00:7b:09:31 in network mk-pause-669817
	I0725 18:37:09.922711   50912 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0725 18:37:09.944066   50912 kubeadm.go:883] updating cluster {Name:pause-669817 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-669817 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:37:09.944235   50912 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:37:09.944292   50912 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:37:10.006016   50912 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:37:10.006039   50912 crio.go:433] Images already preloaded, skipping extraction
	I0725 18:37:10.006100   50912 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:37:10.057657   50912 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:37:10.057683   50912 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:37:10.057692   50912 kubeadm.go:934] updating node { 192.168.61.203 8443 v1.30.3 crio true true} ...
	I0725 18:37:10.057814   50912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-669817 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-669817 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:37:10.057901   50912 ssh_runner.go:195] Run: crio config
	I0725 18:37:10.107800   50912 cni.go:84] Creating CNI manager for ""
	I0725 18:37:10.107826   50912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:37:10.107838   50912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:37:10.107865   50912 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.203 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-669817 NodeName:pause-669817 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:37:10.108076   50912 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-669817"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:37:10.108158   50912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:37:10.121570   50912 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:37:10.121657   50912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:37:10.135235   50912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0725 18:37:10.153521   50912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:37:10.170033   50912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0725 18:37:10.187220   50912 ssh_runner.go:195] Run: grep 192.168.61.203	control-plane.minikube.internal$ /etc/hosts
	I0725 18:37:10.191242   50912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:37:10.375482   50912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:37:10.452629   50912 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/pause-669817 for IP: 192.168.61.203
	I0725 18:37:10.452655   50912 certs.go:194] generating shared ca certs ...
	I0725 18:37:10.452676   50912 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:10.452857   50912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:37:10.452925   50912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:37:10.452944   50912 certs.go:256] generating profile certs ...
	I0725 18:37:10.453041   50912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/pause-669817/client.key
	I0725 18:37:10.453114   50912 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/pause-669817/apiserver.key.f4243edc
	I0725 18:37:10.453167   50912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/pause-669817/proxy-client.key
	I0725 18:37:10.453301   50912 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:37:10.453345   50912 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:37:10.453358   50912 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:37:10.453390   50912 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:37:10.453422   50912 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:37:10.453452   50912 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:37:10.453499   50912 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:37:10.454297   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:37:10.561276   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:37:10.737043   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:37:10.877687   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:37:10.977235   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/pause-669817/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0725 18:37:11.013651   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/pause-669817/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:37:11.045618   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/pause-669817/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:37:11.090278   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/pause-669817/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:37:11.167342   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:37:11.199358   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:37:11.241990   50912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:37:11.276661   50912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:37:11.299155   50912 ssh_runner.go:195] Run: openssl version
	I0725 18:37:11.304778   50912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:37:11.323438   50912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:37:11.329970   50912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:37:11.330045   50912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:37:11.335471   50912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:37:11.348495   50912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:37:11.360678   50912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:37:11.364902   50912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:37:11.364943   50912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:37:11.371846   50912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:37:11.382382   50912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:37:11.394535   50912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:37:11.398884   50912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:37:11.398942   50912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:37:11.405982   50912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:37:11.416391   50912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:37:11.422120   50912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:37:11.428379   50912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:37:11.434789   50912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:37:11.439838   50912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:37:11.445985   50912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:37:11.452109   50912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 18:37:11.459138   50912 kubeadm.go:392] StartCluster: {Name:pause-669817 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-669817 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:37:11.459280   50912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:37:11.459332   50912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:37:11.514707   50912 cri.go:89] found id: "0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442"
	I0725 18:37:11.514731   50912 cri.go:89] found id: "ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03"
	I0725 18:37:11.514736   50912 cri.go:89] found id: "c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258"
	I0725 18:37:11.514741   50912 cri.go:89] found id: "3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed"
	I0725 18:37:11.514745   50912 cri.go:89] found id: "5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7"
	I0725 18:37:11.514750   50912 cri.go:89] found id: "910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8"
	I0725 18:37:11.514754   50912 cri.go:89] found id: "9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1"
	I0725 18:37:11.514759   50912 cri.go:89] found id: "705b92bc54aa08201294e3ad0a9ec7e2f02880a086eda11d1a58dae73d4b13ed"
	I0725 18:37:11.514763   50912 cri.go:89] found id: "519afe169828797418671c396000076f8634c6b49a2981a7c7516f12e89c80e1"
	I0725 18:37:11.514770   50912 cri.go:89] found id: "2b6e0cfec3b4c6dea1f9d4dc34c4d7b7cf4b728f1365ca8a4abf03365602b28e"
	I0725 18:37:11.514774   50912 cri.go:89] found id: "abd981f0e09b9f6367b33ca9138d711db4fbe049eeb639769ae14a20a321477a"
	I0725 18:37:11.514788   50912 cri.go:89] found id: "f4e057b383c5b27de595a9cb12630c202609d011edc6e0560eb965028f7aa5a6"
	I0725 18:37:11.514795   50912 cri.go:89] found id: ""
	I0725 18:37:11.514846   50912 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-669817 -n pause-669817
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-669817 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-669817 logs -n 25: (1.464261571s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-567197       | scheduled-stop-567197     | jenkins | v1.33.1 | 25 Jul 24 18:32 UTC | 25 Jul 24 18:33 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-567197       | scheduled-stop-567197     | jenkins | v1.33.1 | 25 Jul 24 18:33 UTC | 25 Jul 24 18:33 UTC |
	| start   | -p NoKubernetes-896524         | NoKubernetes-896524       | jenkins | v1.33.1 | 25 Jul 24 18:33 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-872594         | offline-crio-872594       | jenkins | v1.33.1 | 25 Jul 24 18:33 UTC | 25 Jul 24 18:35 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-069209   | kubernetes-upgrade-069209 | jenkins | v1.33.1 | 25 Jul 24 18:33 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-896524         | NoKubernetes-896524       | jenkins | v1.33.1 | 25 Jul 24 18:33 UTC | 25 Jul 24 18:35 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-919785      | minikube                  | jenkins | v1.26.0 | 25 Jul 24 18:33 UTC | 25 Jul 24 18:35 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-896524         | NoKubernetes-896524       | jenkins | v1.33.1 | 25 Jul 24 18:35 UTC | 25 Jul 24 18:35 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-872594         | offline-crio-872594       | jenkins | v1.33.1 | 25 Jul 24 18:35 UTC | 25 Jul 24 18:35 UTC |
	| delete  | -p NoKubernetes-896524         | NoKubernetes-896524       | jenkins | v1.33.1 | 25 Jul 24 18:35 UTC | 25 Jul 24 18:35 UTC |
	| start   | -p pause-669817 --memory=2048  | pause-669817              | jenkins | v1.33.1 | 25 Jul 24 18:35 UTC | 25 Jul 24 18:36 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-896524         | NoKubernetes-896524       | jenkins | v1.33.1 | 25 Jul 24 18:35 UTC | 25 Jul 24 18:36 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-919785      | running-upgrade-919785    | jenkins | v1.33.1 | 25 Jul 24 18:35 UTC | 25 Jul 24 18:37 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-896524 sudo    | NoKubernetes-896524       | jenkins | v1.33.1 | 25 Jul 24 18:36 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-896524         | NoKubernetes-896524       | jenkins | v1.33.1 | 25 Jul 24 18:36 UTC | 25 Jul 24 18:36 UTC |
	| start   | -p NoKubernetes-896524         | NoKubernetes-896524       | jenkins | v1.33.1 | 25 Jul 24 18:36 UTC | 25 Jul 24 18:36 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-896524 sudo    | NoKubernetes-896524       | jenkins | v1.33.1 | 25 Jul 24 18:36 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-896524         | NoKubernetes-896524       | jenkins | v1.33.1 | 25 Jul 24 18:36 UTC | 25 Jul 24 18:36 UTC |
	| start   | -p stopped-upgrade-160946      | minikube                  | jenkins | v1.26.0 | 25 Jul 24 18:36 UTC | 25 Jul 24 18:37 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-669817                | pause-669817              | jenkins | v1.33.1 | 25 Jul 24 18:36 UTC | 25 Jul 24 18:37 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-919785      | running-upgrade-919785    | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC | 25 Jul 24 18:37 UTC |
	| start   | -p force-systemd-env-207395    | force-systemd-env-207395  | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC | 25 Jul 24 18:37 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-160946 stop    | minikube                  | jenkins | v1.26.0 | 25 Jul 24 18:37 UTC | 25 Jul 24 18:37 UTC |
	| start   | -p stopped-upgrade-160946      | stopped-upgrade-160946    | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-207395    | force-systemd-env-207395  | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC | 25 Jul 24 18:37 UTC |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:37:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:37:26.879428   51390 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:37:26.879559   51390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:37:26.879568   51390 out.go:304] Setting ErrFile to fd 2...
	I0725 18:37:26.879574   51390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:37:26.879765   51390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:37:26.880308   51390 out.go:298] Setting JSON to false
	I0725 18:37:26.881354   51390 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4791,"bootTime":1721927856,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:37:26.881414   51390 start.go:139] virtualization: kvm guest
	I0725 18:37:26.883822   51390 out.go:177] * [stopped-upgrade-160946] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:37:26.885322   51390 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:37:26.885360   51390 notify.go:220] Checking for updates...
	I0725 18:37:26.887941   51390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:37:26.889142   51390 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:37:26.890333   51390 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:37:26.891697   51390 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:37:26.893323   51390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:37:26.895412   51390 config.go:182] Loaded profile config "stopped-upgrade-160946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0725 18:37:26.895907   51390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:26.895956   51390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:26.915999   51390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0725 18:37:26.916354   51390 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:26.916883   51390 main.go:141] libmachine: Using API Version  1
	I0725 18:37:26.916910   51390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:26.917232   51390 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:26.917413   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:26.919288   51390 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0725 18:37:26.920622   51390 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:37:26.920978   51390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:26.921021   51390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:26.936666   51390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0725 18:37:26.937109   51390 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:26.937664   51390 main.go:141] libmachine: Using API Version  1
	I0725 18:37:26.937691   51390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:26.938002   51390 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:26.938277   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:26.979948   51390 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:37:26.981247   51390 start.go:297] selected driver: kvm2
	I0725 18:37:26.981266   51390 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-160946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-160946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 18:37:26.981401   51390 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:37:26.982355   51390 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:37:26.982450   51390 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:37:26.997198   51390 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:37:26.997579   51390 cni.go:84] Creating CNI manager for ""
	I0725 18:37:26.997594   51390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:37:26.997651   51390 start.go:340] cluster config:
	{Name:stopped-upgrade-160946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-160946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 18:37:26.997776   51390 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:37:26.999506   51390 out.go:177] * Starting "stopped-upgrade-160946" primary control-plane node in "stopped-upgrade-160946" cluster
	I0725 18:37:26.652534   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:37:26.652830   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:37:25.869270   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:25.869759   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | unable to find current IP address of domain force-systemd-env-207395 in network mk-force-systemd-env-207395
	I0725 18:37:25.869788   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | I0725 18:37:25.869687   51181 retry.go:31] will retry after 2.011529099s: waiting for machine to come up
	I0725 18:37:27.882532   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:27.883005   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | unable to find current IP address of domain force-systemd-env-207395 in network mk-force-systemd-env-207395
	I0725 18:37:27.883030   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | I0725 18:37:27.882971   51181 retry.go:31] will retry after 2.958130035s: waiting for machine to come up
	I0725 18:37:27.000740   51390 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0725 18:37:27.000786   51390 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0725 18:37:27.000795   51390 cache.go:56] Caching tarball of preloaded images
	I0725 18:37:27.000864   51390 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:37:27.000874   51390 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0725 18:37:27.000966   51390 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/stopped-upgrade-160946/config.json ...
	I0725 18:37:27.001163   51390 start.go:360] acquireMachinesLock for stopped-upgrade-160946: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:37:32.216067   50912 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442 ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03 c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258 3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed 5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7 910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8 9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1 705b92bc54aa08201294e3ad0a9ec7e2f02880a086eda11d1a58dae73d4b13ed 519afe169828797418671c396000076f8634c6b49a2981a7c7516f12e89c80e1 2b6e0cfec3b4c6dea1f9d4dc34c4d7b7cf4b728f1365ca8a4abf03365602b28e abd981f0e09b9f6367b33ca9138d711db4fbe049eeb639769ae14a20a321477a f4e057b383c5b27de595a9cb12630c202609d011edc6e0560eb965028f7aa5a6: (20.535092336s)
	W0725 18:37:32.216145   50912 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442 ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03 c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258 3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed 5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7 910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8 9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1 705b92bc54aa08201294e3ad0a9ec7e2f02880a086eda11d1a58dae73d4b13ed 519afe169828797418671c396000076f8634c6b49a2981a7c7516f12e89c80e1 2b6e0cfec3b4c6dea1f9d4dc34c4d7b7cf4b728f1365ca8a4abf03365602b28e abd981f0e09b9f6367b33ca9138d711db4fbe049eeb639769ae14a20a321477a f4e057b383c5b27de595a9cb12630c202609d011edc6e0560eb965028f7aa5a6: Process exited with status 1
	stdout:
	0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442
	ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03
	c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258
	3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed
	5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7
	910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8
	
	stderr:
	E0725 18:37:32.198472    3199 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1\": container with ID starting with 9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1 not found: ID does not exist" containerID="9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1"
	time="2024-07-25T18:37:32Z" level=fatal msg="stopping the container \"9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1\": rpc error: code = NotFound desc = could not find container \"9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1\": container with ID starting with 9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1 not found: ID does not exist"
	I0725 18:37:32.216196   50912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:37:32.255733   50912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:37:32.265718   50912 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 25 18:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul 25 18:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 25 18:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jul 25 18:35 /etc/kubernetes/scheduler.conf
	
	I0725 18:37:32.265773   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:37:32.274483   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:37:32.282807   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:37:32.290907   50912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:37:32.290972   50912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:37:32.299298   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:37:32.307270   50912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:37:32.307315   50912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:37:32.315632   50912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:37:32.323889   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:32.379132   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:33.013824   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:33.214772   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:33.281103   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:33.362915   50912 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:37:33.362993   50912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:37:33.864017   50912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:37:30.842495   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:30.842952   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | unable to find current IP address of domain force-systemd-env-207395 in network mk-force-systemd-env-207395
	I0725 18:37:30.842975   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | I0725 18:37:30.842891   51181 retry.go:31] will retry after 4.306112991s: waiting for machine to come up
	I0725 18:37:36.612945   51390 start.go:364] duration metric: took 9.611724038s to acquireMachinesLock for "stopped-upgrade-160946"
	I0725 18:37:36.613018   51390 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:37:36.613029   51390 fix.go:54] fixHost starting: 
	I0725 18:37:36.613457   51390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:36.613511   51390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:36.631531   51390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44583
	I0725 18:37:36.632018   51390 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:36.632593   51390 main.go:141] libmachine: Using API Version  1
	I0725 18:37:36.632617   51390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:36.632952   51390 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:36.633143   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:36.633309   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetState
	I0725 18:37:36.634837   51390 fix.go:112] recreateIfNeeded on stopped-upgrade-160946: state=Stopped err=<nil>
	I0725 18:37:36.634860   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	W0725 18:37:36.634999   51390 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:37:36.637119   51390 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-160946" ...
	I0725 18:37:36.638524   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .Start
	I0725 18:37:36.638728   51390 main.go:141] libmachine: (stopped-upgrade-160946) Ensuring networks are active...
	I0725 18:37:36.639534   51390 main.go:141] libmachine: (stopped-upgrade-160946) Ensuring network default is active
	I0725 18:37:36.639921   51390 main.go:141] libmachine: (stopped-upgrade-160946) Ensuring network mk-stopped-upgrade-160946 is active
	I0725 18:37:36.640342   51390 main.go:141] libmachine: (stopped-upgrade-160946) Getting domain xml...
	I0725 18:37:36.641152   51390 main.go:141] libmachine: (stopped-upgrade-160946) Creating domain...
	I0725 18:37:35.154527   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.155009   51158 main.go:141] libmachine: (force-systemd-env-207395) Found IP for machine: 192.168.72.213
	I0725 18:37:35.155035   51158 main.go:141] libmachine: (force-systemd-env-207395) Reserving static IP address...
	I0725 18:37:35.155049   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has current primary IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.155454   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | unable to find host DHCP lease matching {name: "force-systemd-env-207395", mac: "52:54:00:5d:0f:d7", ip: "192.168.72.213"} in network mk-force-systemd-env-207395
	I0725 18:37:35.228956   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | Getting to WaitForSSH function...
	I0725 18:37:35.228989   51158 main.go:141] libmachine: (force-systemd-env-207395) Reserved static IP address: 192.168.72.213
	I0725 18:37:35.229003   51158 main.go:141] libmachine: (force-systemd-env-207395) Waiting for SSH to be available...
	I0725 18:37:35.231631   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.232117   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.232140   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.232277   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | Using SSH client type: external
	I0725 18:37:35.232299   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa (-rw-------)
	I0725 18:37:35.232375   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:37:35.232396   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | About to run SSH command:
	I0725 18:37:35.232413   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | exit 0
	I0725 18:37:35.356671   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | SSH cmd err, output: <nil>: 
	I0725 18:37:35.357009   51158 main.go:141] libmachine: (force-systemd-env-207395) KVM machine creation complete!
	I0725 18:37:35.357317   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetConfigRaw
	I0725 18:37:35.358013   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:35.358214   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:35.358421   51158 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 18:37:35.358439   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetState
	I0725 18:37:35.360001   51158 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 18:37:35.360018   51158 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 18:37:35.360026   51158 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 18:37:35.360035   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.362702   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.363147   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.363186   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.363314   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:35.363517   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.363694   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.363835   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:35.364051   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:35.364408   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:35.364427   51158 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 18:37:35.467507   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:37:35.467530   51158 main.go:141] libmachine: Detecting the provisioner...
	I0725 18:37:35.467541   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.470207   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.470657   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.470695   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.470848   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:35.471003   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.471170   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.471335   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:35.471512   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:35.471689   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:35.471702   51158 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 18:37:35.580693   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 18:37:35.580752   51158 main.go:141] libmachine: found compatible host: buildroot
	I0725 18:37:35.580759   51158 main.go:141] libmachine: Provisioning with buildroot...
	I0725 18:37:35.580768   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetMachineName
	I0725 18:37:35.581044   51158 buildroot.go:166] provisioning hostname "force-systemd-env-207395"
	I0725 18:37:35.581066   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetMachineName
	I0725 18:37:35.581280   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.584006   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.584403   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.584434   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.584598   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:35.584796   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.584965   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.585093   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:35.585309   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:35.585533   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:35.585556   51158 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-207395 && echo "force-systemd-env-207395" | sudo tee /etc/hostname
	I0725 18:37:35.706662   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-207395
	
	I0725 18:37:35.706700   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.709663   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.710074   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.710107   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.710330   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:35.710560   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.710732   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.710889   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:35.711091   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:35.711303   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:35.711329   51158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-207395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-207395/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-207395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:37:35.824397   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:37:35.824429   51158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:37:35.824451   51158 buildroot.go:174] setting up certificates
	I0725 18:37:35.824484   51158 provision.go:84] configureAuth start
	I0725 18:37:35.824500   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetMachineName
	I0725 18:37:35.824778   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetIP
	I0725 18:37:35.827492   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.827989   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.828025   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.828188   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.830514   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.830850   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.830879   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.830977   51158 provision.go:143] copyHostCerts
	I0725 18:37:35.831006   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:37:35.831042   51158 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:37:35.831062   51158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:37:35.831129   51158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:37:35.831244   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:37:35.831273   51158 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:37:35.831279   51158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:37:35.831353   51158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:37:35.831461   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:37:35.831481   51158 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:37:35.831485   51158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:37:35.831520   51158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:37:35.831602   51158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-207395 san=[127.0.0.1 192.168.72.213 force-systemd-env-207395 localhost minikube]
	I0725 18:37:35.930422   51158 provision.go:177] copyRemoteCerts
	I0725 18:37:35.930483   51158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:37:35.930505   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.933395   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.933731   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.933761   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.933947   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:35.934123   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.934244   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:35.934361   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:36.018074   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 18:37:36.018152   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:37:36.042953   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 18:37:36.043026   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0725 18:37:36.068853   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 18:37:36.068934   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:37:36.093609   51158 provision.go:87] duration metric: took 269.109544ms to configureAuth
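configureAuth above generated and copied a server certificate whose SANs were listed in the provision step (127.0.0.1, 192.168.72.213, force-systemd-env-207395, localhost, minikube). A hedged, manual way to confirm those SANs on the guest, assuming openssl is available in the Buildroot image (the test itself does not run this):

    # inspect the server cert minikube just copied to /etc/docker/server.pem
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'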
	I0725 18:37:36.093637   51158 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:37:36.093841   51158 config.go:182] Loaded profile config "force-systemd-env-207395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:37:36.093915   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:36.096740   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.097099   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.097130   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.097284   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:36.097495   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.097652   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.097835   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:36.097970   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:36.098135   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:36.098148   51158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:37:36.372553   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
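The drop-in echoed above is what CRI-O was restarted with; a hedged way to double-check it stuck and that the daemon came back up (run over the same SSH session, not part of the test output):

    cat /etc/sysconfig/crio.minikube   # should contain the CRIO_MINIKUBE_OPTIONS line shown above
    sudo systemctl is-active crio      # prints "active" once the restart has completed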
	
	I0725 18:37:36.372582   51158 main.go:141] libmachine: Checking connection to Docker...
	I0725 18:37:36.372594   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetURL
	I0725 18:37:36.373947   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | Using libvirt version 6000000
	I0725 18:37:36.376516   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.376893   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.376914   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.377130   51158 main.go:141] libmachine: Docker is up and running!
	I0725 18:37:36.377142   51158 main.go:141] libmachine: Reticulating splines...
	I0725 18:37:36.377148   51158 client.go:171] duration metric: took 21.156697171s to LocalClient.Create
	I0725 18:37:36.377166   51158 start.go:167] duration metric: took 21.15675888s to libmachine.API.Create "force-systemd-env-207395"
	I0725 18:37:36.377175   51158 start.go:293] postStartSetup for "force-systemd-env-207395" (driver="kvm2")
	I0725 18:37:36.377185   51158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:37:36.377201   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:36.377387   51158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:37:36.377407   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:36.379588   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.379897   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.379929   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.380078   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:36.380273   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.380442   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:36.380587   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:36.465651   51158 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:37:36.470404   51158 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:37:36.470435   51158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:37:36.470505   51158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:37:36.470601   51158 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:37:36.470612   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 18:37:36.470693   51158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:37:36.479530   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:37:36.501772   51158 start.go:296] duration metric: took 124.583545ms for postStartSetup
	I0725 18:37:36.501832   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetConfigRaw
	I0725 18:37:36.502439   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetIP
	I0725 18:37:36.505291   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.505695   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.505723   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.505966   51158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/config.json ...
	I0725 18:37:36.506128   51158 start.go:128] duration metric: took 21.30342098s to createHost
	I0725 18:37:36.506150   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:36.508241   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.508655   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.508697   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.508754   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:36.508948   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.509111   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.509302   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:36.509476   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:36.509649   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:36.509666   51158 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:37:36.612772   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721932656.587664722
	
	I0725 18:37:36.612794   51158 fix.go:216] guest clock: 1721932656.587664722
	I0725 18:37:36.612804   51158 fix.go:229] Guest: 2024-07-25 18:37:36.587664722 +0000 UTC Remote: 2024-07-25 18:37:36.506139556 +0000 UTC m=+21.426247860 (delta=81.525166ms)
	I0725 18:37:36.612827   51158 fix.go:200] guest clock delta is within tolerance: 81.525166ms
	I0725 18:37:36.612833   51158 start.go:83] releasing machines lock for "force-systemd-env-207395", held for 21.410232895s
	I0725 18:37:36.612863   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:36.613120   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetIP
	I0725 18:37:36.616079   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.616477   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.616520   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.616680   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:36.617215   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:36.617437   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:36.617552   51158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:37:36.617608   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:36.617648   51158 ssh_runner.go:195] Run: cat /version.json
	I0725 18:37:36.617675   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:36.620631   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.621623   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.621663   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.621683   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.621951   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:36.622157   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.622163   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.622197   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.622256   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:36.622353   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:36.622435   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.622587   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:36.622599   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:36.622691   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:36.729608   51158 ssh_runner.go:195] Run: systemctl --version
	I0725 18:37:36.736844   51158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:37:36.897908   51158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:37:36.907655   51158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:37:36.907733   51158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:37:36.926819   51158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:37:36.926849   51158 start.go:495] detecting cgroup driver to use...
	I0725 18:37:36.926869   51158 start.go:499] using "systemd" cgroup driver as enforced via flags
	I0725 18:37:36.926922   51158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:37:36.945203   51158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:37:36.961321   51158 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:37:36.961395   51158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:37:36.979396   51158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:37:36.995540   51158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:37:37.133097   51158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:37:37.286124   51158 docker.go:233] disabling docker service ...
	I0725 18:37:37.286201   51158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:37:37.305141   51158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:37:37.323346   51158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:37:37.472041   51158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:37:37.605641   51158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:37:37.619013   51158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:37:37.635734   51158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:37:37.635800   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.645868   51158 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0725 18:37:37.645941   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.656853   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.667507   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.681663   51158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:37:37.696044   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.710259   51158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.732363   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
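Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup driver, keep conmon in the pod cgroup, and allow unprivileged low ports for pods. A hedged check of the resulting keys on the guest, expected values shown as comments (assuming the stock 02-crio.conf drop-in layout):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",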
	I0725 18:37:37.745912   51158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:37:37.758312   51158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:37:37.758386   51158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:37:37.773040   51158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
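The failed sysctl probe, the modprobe, and the echo above cover the usual kernel prerequisites for bridged pod traffic; a hedged verification sketch (both values are expected to read 1 once br_netfilter is loaded and forwarding is enabled):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward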
	I0725 18:37:37.784057   51158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:37:37.945371   51158 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:37:38.091653   51158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:37:38.091739   51158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:37:38.096974   51158 start.go:563] Will wait 60s for crictl version
	I0725 18:37:38.097055   51158 ssh_runner.go:195] Run: which crictl
	I0725 18:37:38.101593   51158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:37:38.149371   51158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:37:38.149463   51158 ssh_runner.go:195] Run: crio --version
	I0725 18:37:38.183854   51158 ssh_runner.go:195] Run: crio --version
	I0725 18:37:38.221940   51158 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:37:34.363464   50912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:37:34.378086   50912 api_server.go:72] duration metric: took 1.015171096s to wait for apiserver process to appear ...
	I0725 18:37:34.378114   50912 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:37:34.378135   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:37.102786   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:37:37.102819   50912 api_server.go:103] status: https://192.168.61.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:37:37.102834   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:37.131736   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:37:37.131772   50912 api_server.go:103] status: https://192.168.61.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:37:37.379217   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:37.384876   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:37:37.384908   50912 api_server.go:103] status: https://192.168.61.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:37:37.878529   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:37.884844   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:37:37.884873   50912 api_server.go:103] status: https://192.168.61.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:37:38.378346   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:38.383766   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 200:
	ok
	I0725 18:37:38.390454   50912 api_server.go:141] control plane version: v1.30.3
	I0725 18:37:38.390484   50912 api_server.go:131] duration metric: took 4.012362885s to wait for apiserver health ...
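The 403/500/200 progression above is the apiserver coming up: anonymous requests are rejected until the RBAC bootstrap roles exist, /healthz then returns 500 while the remaining post-start hooks finish, and finally 200. A hedged way to reproduce the same probe by hand from the host (-k skips TLS verification, acceptable for a quick anonymous check):

    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.61.203:8443/healthz
    # per-check detail, matching the [+]/[-] lines printed above
    curl -sk 'https://192.168.61.203:8443/healthz?verbose'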
	I0725 18:37:38.390495   50912 cni.go:84] Creating CNI manager for ""
	I0725 18:37:38.390503   50912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:37:38.392270   50912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:37:38.393562   50912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:37:38.406023   50912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
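A hedged way to confirm the generated bridge conflist landed on the pause-669817 guest and to inspect its contents (496 bytes per the scp above); the test does not run this itself:

    ls -l /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist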
	I0725 18:37:38.424867   50912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:37:38.439009   50912 system_pods.go:59] 6 kube-system pods found
	I0725 18:37:38.439048   50912 system_pods.go:61] "coredns-7db6d8ff4d-jn9l2" [f8c1b738-b4ca-4606-b07d-d2ce0d5149a7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:37:38.439060   50912 system_pods.go:61] "etcd-pause-669817" [30a01595-37b9-4c88-93a0-c8d38a35074f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:37:38.439075   50912 system_pods.go:61] "kube-apiserver-pause-669817" [f7cd9a3e-3c9b-4f08-beef-22cb0162ac30] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:37:38.439089   50912 system_pods.go:61] "kube-controller-manager-pause-669817" [57a3f32e-0861-4993-900a-09bb3dad867d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:37:38.439100   50912 system_pods.go:61] "kube-proxy-m4njw" [300b49b6-c6ee-4298-b856-0579eecc04f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:37:38.439109   50912 system_pods.go:61] "kube-scheduler-pause-669817" [d8654b2f-fa11-4f06-a7ee-ca40b65bdd83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:37:38.439121   50912 system_pods.go:74] duration metric: took 14.231502ms to wait for pod list to return data ...
	I0725 18:37:38.439134   50912 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:37:38.443801   50912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:37:38.443839   50912 node_conditions.go:123] node cpu capacity is 2
	I0725 18:37:38.443855   50912 node_conditions.go:105] duration metric: took 4.714582ms to run NodePressure ...
	I0725 18:37:38.443879   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:38.745063   50912 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:37:38.750473   50912 kubeadm.go:739] kubelet initialised
	I0725 18:37:38.750492   50912 kubeadm.go:740] duration metric: took 5.402046ms waiting for restarted kubelet to initialise ...
	I0725 18:37:38.750500   50912 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:37:38.762530   50912 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:38.223338   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetIP
	I0725 18:37:38.226325   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:38.226705   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:38.226735   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:38.226963   51158 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0725 18:37:38.231001   51158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
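A hedged one-liner to confirm the host.minikube.internal entry written by the command above is in effect on the guest:

    getent hosts host.minikube.internal   # expected to resolve to 192.168.72.1 after the rewrite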
	I0725 18:37:38.244019   51158 kubeadm.go:883] updating cluster {Name:force-systemd-env-207395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-207395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:37:38.244142   51158 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:37:38.244207   51158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:37:38.284013   51158 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:37:38.284081   51158 ssh_runner.go:195] Run: which lz4
	I0725 18:37:38.287855   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0725 18:37:38.287962   51158 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:37:38.292420   51158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:37:38.292455   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:37:39.563096   51158 crio.go:462] duration metric: took 1.275163257s to copy over tarball
	I0725 18:37:39.563183   51158 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:37:37.992221   51390 main.go:141] libmachine: (stopped-upgrade-160946) Waiting to get IP...
	I0725 18:37:37.993236   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:37.993688   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:37.993795   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:37.993687   51490 retry.go:31] will retry after 226.658501ms: waiting for machine to come up
	I0725 18:37:38.222415   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:38.223036   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:38.223064   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:38.222983   51490 retry.go:31] will retry after 273.378812ms: waiting for machine to come up
	I0725 18:37:38.498623   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:38.499101   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:38.499140   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:38.499058   51490 retry.go:31] will retry after 468.694129ms: waiting for machine to come up
	I0725 18:37:38.969952   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:38.970539   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:38.970564   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:38.970456   51490 retry.go:31] will retry after 523.855417ms: waiting for machine to come up
	I0725 18:37:39.496987   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:39.497615   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:39.497639   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:39.497572   51490 retry.go:31] will retry after 569.232898ms: waiting for machine to come up
	I0725 18:37:40.068462   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:40.069070   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:40.069129   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:40.069047   51490 retry.go:31] will retry after 646.366469ms: waiting for machine to come up
	I0725 18:37:40.716926   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:40.717443   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:40.717473   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:40.717397   51490 retry.go:31] will retry after 1.049207488s: waiting for machine to come up
	I0725 18:37:41.767965   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:41.768467   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:41.768500   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:41.768431   51490 retry.go:31] will retry after 988.54089ms: waiting for machine to come up
	I0725 18:37:39.769758   50912 pod_ready.go:92] pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:39.769788   50912 pod_ready.go:81] duration metric: took 1.007226195s for pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:39.769800   50912 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:41.777724   50912 pod_ready.go:102] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"False"
	I0725 18:37:41.807856   51158 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.244642165s)
	I0725 18:37:41.807914   51158 crio.go:469] duration metric: took 2.244786211s to extract the tarball
	I0725 18:37:41.807925   51158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:37:41.844752   51158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:37:41.890282   51158 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:37:41.890336   51158 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:37:41.890347   51158 kubeadm.go:934] updating node { 192.168.72.213 8443 v1.30.3 crio true true} ...
	I0725 18:37:41.890519   51158 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-207395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-207395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:37:41.890604   51158 ssh_runner.go:195] Run: crio config
	I0725 18:37:41.941566   51158 cni.go:84] Creating CNI manager for ""
	I0725 18:37:41.941589   51158 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:37:41.941598   51158 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:37:41.941616   51158 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.213 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-207395 NodeName:force-systemd-env-207395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.213 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:37:41.941750   51158 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-207395"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:37:41.941807   51158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:37:41.951481   51158 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:37:41.951542   51158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:37:41.960748   51158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0725 18:37:41.976481   51158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:37:41.991765   51158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
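The 2167-byte file staged above is the kubeadm/kubelet/kube-proxy config printed a few lines earlier. A hedged sketch of how such a config can be sanity-checked and consumed with the versioned kubeadm binary, mirroring the phased invocation visible elsewhere in this log (paths and version taken from the surrounding lines):

    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml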
	I0725 18:37:42.007359   51158 ssh_runner.go:195] Run: grep 192.168.72.213	control-plane.minikube.internal$ /etc/hosts
	I0725 18:37:42.011429   51158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:37:42.022719   51158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:37:42.140432   51158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:37:42.156977   51158 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395 for IP: 192.168.72.213
	I0725 18:37:42.157001   51158 certs.go:194] generating shared ca certs ...
	I0725 18:37:42.157034   51158 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.157269   51158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:37:42.157421   51158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:37:42.157440   51158 certs.go:256] generating profile certs ...
	I0725 18:37:42.157505   51158 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key
	I0725 18:37:42.157519   51158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt with IP's: []
	I0725 18:37:42.302585   51158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt ...
	I0725 18:37:42.302613   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt: {Name:mk16be5c27c4cc6a0c88bb557b296ce31c7b5c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.302809   51158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key ...
	I0725 18:37:42.302826   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key: {Name:mkf4b82c57f53278e53d1e5096d1d42f0ac3abcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.302933   51158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key.d9f0523c
	I0725 18:37:42.302958   51158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt.d9f0523c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.213]
	I0725 18:37:42.452149   51158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt.d9f0523c ...
	I0725 18:37:42.452178   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt.d9f0523c: {Name:mke5e225d144aa993adea72f83b5ee090f705175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.452370   51158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key.d9f0523c ...
	I0725 18:37:42.452390   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key.d9f0523c: {Name:mk98b47279530bde8608b2babcc2d4a7e6997db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.452489   51158 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt.d9f0523c -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt
	I0725 18:37:42.452580   51158 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key.d9f0523c -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key
	I0725 18:37:42.452652   51158 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.key
	I0725 18:37:42.452672   51158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.crt with IP's: []
	I0725 18:37:42.653655   51158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.crt ...
	I0725 18:37:42.653685   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.crt: {Name:mk9dad70ca3c57f542f8832d0919be12950e9cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.653869   51158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.key ...
	I0725 18:37:42.653896   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.key: {Name:mk2f958654552c6ee67dc797e5e213774d84faa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.654004   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 18:37:42.654031   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 18:37:42.654050   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 18:37:42.654069   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 18:37:42.654086   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 18:37:42.654104   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 18:37:42.654121   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 18:37:42.654144   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 18:37:42.654210   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:37:42.654255   51158 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:37:42.654268   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:37:42.654309   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:37:42.654339   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:37:42.654364   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:37:42.654423   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:37:42.654466   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 18:37:42.654502   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:37:42.654527   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 18:37:42.655146   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:37:42.679476   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:37:42.701810   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:37:42.723515   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:37:42.744893   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0725 18:37:42.768275   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:37:42.791863   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:37:42.813264   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:37:42.835192   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:37:42.857209   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:37:42.881874   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:37:42.905690   51158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:37:42.926069   51158 ssh_runner.go:195] Run: openssl version
	I0725 18:37:42.944001   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:37:42.958630   51158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:37:42.964601   51158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:37:42.964680   51158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:37:42.970556   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:37:42.983587   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:37:42.995121   51158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:37:42.999400   51158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:37:42.999464   51158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:37:43.005473   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:37:43.016006   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:37:43.027325   51158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:37:43.031568   51158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:37:43.031617   51158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:37:43.037051   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:37:43.047294   51158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:37:43.051239   51158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 18:37:43.051298   51158 kubeadm.go:392] StartCluster: {Name:force-systemd-env-207395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-207395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:37:43.051387   51158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:37:43.051481   51158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:37:43.088191   51158 cri.go:89] found id: ""
	I0725 18:37:43.088279   51158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:37:43.097882   51158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:37:43.111589   51158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:37:43.125586   51158 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:37:43.125603   51158 kubeadm.go:157] found existing configuration files:
	
	I0725 18:37:43.125652   51158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:37:43.135488   51158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:37:43.135555   51158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:37:43.145341   51158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:37:43.154987   51158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:37:43.155061   51158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:37:43.164441   51158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:37:43.174530   51158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:37:43.174600   51158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:37:43.184867   51158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:37:43.194080   51158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:37:43.194151   51158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:37:43.203635   51158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:37:43.433075   51158 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:37:42.758771   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:42.759309   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:42.759341   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:42.759250   51490 retry.go:31] will retry after 1.591539118s: waiting for machine to come up
	I0725 18:37:44.352514   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:44.353082   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:44.353110   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:44.353034   51490 retry.go:31] will retry after 1.605092008s: waiting for machine to come up
	I0725 18:37:45.959813   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:45.960239   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:45.960260   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:45.960219   51490 retry.go:31] will retry after 1.977540708s: waiting for machine to come up
	I0725 18:37:44.279259   50912 pod_ready.go:102] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"False"
	I0725 18:37:46.776746   50912 pod_ready.go:102] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"False"
	I0725 18:37:48.776872   50912 pod_ready.go:102] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"False"
	I0725 18:37:46.651787   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:37:46.652100   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:37:47.939560   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:47.940071   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:47.940104   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:47.940015   51490 retry.go:31] will retry after 3.270081065s: waiting for machine to come up
	I0725 18:37:51.214315   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:51.214766   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:51.214820   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:51.214704   51490 retry.go:31] will retry after 3.806476269s: waiting for machine to come up
	I0725 18:37:50.276650   50912 pod_ready.go:92] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:50.276678   50912 pod_ready.go:81] duration metric: took 10.506869395s for pod "etcd-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:50.276692   50912 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:51.283526   50912 pod_ready.go:92] pod "kube-apiserver-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:51.283552   50912 pod_ready.go:81] duration metric: took 1.006847646s for pod "kube-apiserver-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:51.283565   50912 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.291717   50912 pod_ready.go:92] pod "kube-controller-manager-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:53.291738   50912 pod_ready.go:81] duration metric: took 2.008166733s for pod "kube-controller-manager-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.291747   50912 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4njw" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.301877   50912 pod_ready.go:92] pod "kube-proxy-m4njw" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:53.301898   50912 pod_ready.go:81] duration metric: took 10.144877ms for pod "kube-proxy-m4njw" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.301907   50912 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.307052   50912 pod_ready.go:92] pod "kube-scheduler-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:53.307069   50912 pod_ready.go:81] duration metric: took 5.156035ms for pod "kube-scheduler-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.307077   50912 pod_ready.go:38] duration metric: took 14.556568018s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:37:53.307096   50912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:37:53.323687   50912 ops.go:34] apiserver oom_adj: -16
	I0725 18:37:53.323714   50912 kubeadm.go:597] duration metric: took 41.72887376s to restartPrimaryControlPlane
	I0725 18:37:53.323724   50912 kubeadm.go:394] duration metric: took 41.864609202s to StartCluster
	I0725 18:37:53.323743   50912 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:53.323815   50912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:37:53.326326   50912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:53.326607   50912 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:37:53.326714   50912 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:37:53.326852   50912 config.go:182] Loaded profile config "pause-669817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:37:53.329022   50912 out.go:177] * Verifying Kubernetes components...
	I0725 18:37:53.329024   50912 out.go:177] * Enabled addons: 
	I0725 18:37:54.215952   51158 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 18:37:54.216027   51158 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:37:54.216169   51158 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:37:54.216294   51158 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:37:54.216396   51158 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:37:54.216451   51158 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:37:54.217956   51158 out.go:204]   - Generating certificates and keys ...
	I0725 18:37:54.218022   51158 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:37:54.218092   51158 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:37:54.218181   51158 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 18:37:54.218258   51158 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 18:37:54.218358   51158 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 18:37:54.218425   51158 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 18:37:54.218474   51158 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 18:37:54.218648   51158 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-207395 localhost] and IPs [192.168.72.213 127.0.0.1 ::1]
	I0725 18:37:54.218723   51158 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 18:37:54.218908   51158 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-207395 localhost] and IPs [192.168.72.213 127.0.0.1 ::1]
	I0725 18:37:54.219015   51158 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 18:37:54.219117   51158 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 18:37:54.219176   51158 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 18:37:54.219236   51158 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:37:54.219282   51158 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:37:54.219343   51158 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 18:37:54.219390   51158 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:37:54.219444   51158 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:37:54.219489   51158 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:37:54.219580   51158 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:37:54.219648   51158 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:37:54.221485   51158 out.go:204]   - Booting up control plane ...
	I0725 18:37:54.221590   51158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:37:54.221706   51158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:37:54.221792   51158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:37:54.221945   51158 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:37:54.222061   51158 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:37:54.222112   51158 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:37:54.222263   51158 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 18:37:54.222353   51158 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 18:37:54.222429   51158 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.734134ms
	I0725 18:37:54.222533   51158 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 18:37:54.222617   51158 kubeadm.go:310] [api-check] The API server is healthy after 6.001493838s
	I0725 18:37:54.222728   51158 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 18:37:54.222843   51158 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 18:37:54.222893   51158 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 18:37:54.223092   51158 kubeadm.go:310] [mark-control-plane] Marking the node force-systemd-env-207395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 18:37:54.223155   51158 kubeadm.go:310] [bootstrap-token] Using token: w7ppv0.pn4hoefyzgyx4icy
	I0725 18:37:54.224446   51158 out.go:204]   - Configuring RBAC rules ...
	I0725 18:37:54.224570   51158 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 18:37:54.224682   51158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 18:37:54.224849   51158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 18:37:54.225036   51158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 18:37:54.225189   51158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 18:37:54.225298   51158 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 18:37:54.225454   51158 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 18:37:54.225518   51158 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 18:37:54.225567   51158 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 18:37:54.225573   51158 kubeadm.go:310] 
	I0725 18:37:54.225623   51158 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 18:37:54.225627   51158 kubeadm.go:310] 
	I0725 18:37:54.225711   51158 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 18:37:54.225719   51158 kubeadm.go:310] 
	I0725 18:37:54.225751   51158 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 18:37:54.225843   51158 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 18:37:54.225918   51158 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 18:37:54.225926   51158 kubeadm.go:310] 
	I0725 18:37:54.225988   51158 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 18:37:54.225998   51158 kubeadm.go:310] 
	I0725 18:37:54.226058   51158 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 18:37:54.226068   51158 kubeadm.go:310] 
	I0725 18:37:54.226142   51158 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 18:37:54.226245   51158 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 18:37:54.226364   51158 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 18:37:54.226377   51158 kubeadm.go:310] 
	I0725 18:37:54.226496   51158 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 18:37:54.226602   51158 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 18:37:54.226614   51158 kubeadm.go:310] 
	I0725 18:37:54.226716   51158 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w7ppv0.pn4hoefyzgyx4icy \
	I0725 18:37:54.226848   51158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 \
	I0725 18:37:54.226880   51158 kubeadm.go:310] 	--control-plane 
	I0725 18:37:54.226888   51158 kubeadm.go:310] 
	I0725 18:37:54.226992   51158 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 18:37:54.227004   51158 kubeadm.go:310] 
	I0725 18:37:54.227104   51158 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w7ppv0.pn4hoefyzgyx4icy \
	I0725 18:37:54.227243   51158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 
	I0725 18:37:54.227259   51158 cni.go:84] Creating CNI manager for ""
	I0725 18:37:54.227268   51158 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:37:54.228705   51158 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:37:53.330242   50912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:37:53.330237   50912 addons.go:510] duration metric: took 3.528591ms for enable addons: enabled=[]
	I0725 18:37:53.493527   50912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:37:53.509187   50912 node_ready.go:35] waiting up to 6m0s for node "pause-669817" to be "Ready" ...
	I0725 18:37:53.512033   50912 node_ready.go:49] node "pause-669817" has status "Ready":"True"
	I0725 18:37:53.512055   50912 node_ready.go:38] duration metric: took 2.833884ms for node "pause-669817" to be "Ready" ...
	I0725 18:37:53.512065   50912 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:37:53.517519   50912 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.522252   50912 pod_ready.go:92] pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:53.522283   50912 pod_ready.go:81] duration metric: took 4.73509ms for pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.522295   50912 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.874217   50912 pod_ready.go:92] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:53.874241   50912 pod_ready.go:81] duration metric: took 351.938362ms for pod "etcd-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.874256   50912 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:54.230023   51158 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:37:54.242701   51158 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:37:54.259749   51158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:37:54.259881   51158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:37:54.259895   51158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-env-207395 minikube.k8s.io/updated_at=2024_07_25T18_37_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=force-systemd-env-207395 minikube.k8s.io/primary=true
	I0725 18:37:54.284240   51158 ops.go:34] apiserver oom_adj: -16
	I0725 18:37:54.456998   51158 kubeadm.go:1113] duration metric: took 197.181529ms to wait for elevateKubeSystemPrivileges
	I0725 18:37:54.457032   51158 kubeadm.go:394] duration metric: took 11.405739314s to StartCluster
	I0725 18:37:54.457064   51158 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:54.457162   51158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:37:54.458530   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:54.458839   51158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 18:37:54.458871   51158 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:37:54.458923   51158 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:37:54.459011   51158 addons.go:69] Setting storage-provisioner=true in profile "force-systemd-env-207395"
	I0725 18:37:54.459042   51158 addons.go:234] Setting addon storage-provisioner=true in "force-systemd-env-207395"
	I0725 18:37:54.459045   51158 addons.go:69] Setting default-storageclass=true in profile "force-systemd-env-207395"
	I0725 18:37:54.459117   51158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-env-207395"
	I0725 18:37:54.459058   51158 config.go:182] Loaded profile config "force-systemd-env-207395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:37:54.459071   51158 host.go:66] Checking if "force-systemd-env-207395" exists ...
	I0725 18:37:54.459550   51158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:54.459577   51158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:54.459655   51158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:54.459696   51158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:54.460351   51158 out.go:177] * Verifying Kubernetes components...
	I0725 18:37:54.461638   51158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:37:54.475310   51158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
	I0725 18:37:54.475851   51158 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:54.476458   51158 main.go:141] libmachine: Using API Version  1
	I0725 18:37:54.476480   51158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:54.477052   51158 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:54.477266   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetState
	I0725 18:37:54.479113   51158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0725 18:37:54.479564   51158 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:54.479977   51158 kapi.go:59] client config for force-systemd-env-207395: &rest.Config{Host:"https://192.168.72.213:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 18:37:54.480099   51158 main.go:141] libmachine: Using API Version  1
	I0725 18:37:54.480124   51158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:54.480417   51158 cert_rotation.go:137] Starting client certificate rotation controller
	I0725 18:37:54.480481   51158 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:54.480678   51158 addons.go:234] Setting addon default-storageclass=true in "force-systemd-env-207395"
	I0725 18:37:54.480720   51158 host.go:66] Checking if "force-systemd-env-207395" exists ...
	I0725 18:37:54.481042   51158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:54.481091   51158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:54.481178   51158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:54.481204   51158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:54.495515   51158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36313
	I0725 18:37:54.495848   51158 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:54.496278   51158 main.go:141] libmachine: Using API Version  1
	I0725 18:37:54.496299   51158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:54.496606   51158 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:54.496765   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetState
	I0725 18:37:54.498455   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:54.499897   51158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0725 18:37:54.500246   51158 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:54.500749   51158 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:37:54.500805   51158 main.go:141] libmachine: Using API Version  1
	I0725 18:37:54.500845   51158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:54.501198   51158 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:54.501676   51158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:54.501712   51158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:54.501965   51158 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:37:54.501997   51158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:37:54.502016   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:54.505369   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:54.505839   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:54.505868   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:54.506130   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:54.506294   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:54.506475   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:54.506610   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:54.517594   51158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42097
	I0725 18:37:54.517959   51158 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:54.518387   51158 main.go:141] libmachine: Using API Version  1
	I0725 18:37:54.518407   51158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:54.518705   51158 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:54.518909   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetState
	I0725 18:37:54.520642   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:54.520864   51158 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:37:54.520880   51158 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:37:54.520899   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:54.524034   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:54.524549   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:54.524580   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:54.524732   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:54.524937   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:54.525084   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:54.525240   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:54.621164   51158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 18:37:54.674389   51158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:37:54.790449   51158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:37:54.899405   51158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:37:55.002948   51158 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0725 18:37:55.003081   51158 main.go:141] libmachine: Making call to close driver server
	I0725 18:37:55.003100   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .Close
	I0725 18:37:55.003374   51158 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:37:55.003395   51158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:37:55.003405   51158 main.go:141] libmachine: Making call to close driver server
	I0725 18:37:55.003414   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .Close
	I0725 18:37:55.003654   51158 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:37:55.003668   51158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:37:55.003912   51158 kapi.go:59] client config for force-systemd-env-207395: &rest.Config{Host:"https://192.168.72.213:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 18:37:55.003933   51158 kapi.go:59] client config for force-systemd-env-207395: &rest.Config{Host:"https://192.168.72.213:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 18:37:55.004263   51158 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:37:55.004360   51158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:37:55.022309   51158 main.go:141] libmachine: Making call to close driver server
	I0725 18:37:55.022334   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .Close
	I0725 18:37:55.022762   51158 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:37:55.022780   51158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:37:55.274638   51158 api_server.go:72] duration metric: took 815.723494ms to wait for apiserver process to appear ...
	I0725 18:37:55.274665   51158 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:37:55.274692   51158 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0725 18:37:55.274787   51158 main.go:141] libmachine: Making call to close driver server
	I0725 18:37:55.274809   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .Close
	I0725 18:37:55.275078   51158 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:37:55.275097   51158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:37:55.275108   51158 main.go:141] libmachine: Making call to close driver server
	I0725 18:37:55.275117   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .Close
	I0725 18:37:55.275372   51158 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:37:55.275386   51158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:37:55.276940   51158 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0725 18:37:55.278241   51158 addons.go:510] duration metric: took 819.314788ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0725 18:37:55.280618   51158 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0725 18:37:55.281703   51158 api_server.go:141] control plane version: v1.30.3
	I0725 18:37:55.281724   51158 api_server.go:131] duration metric: took 7.052659ms to wait for apiserver health ...
	I0725 18:37:55.281732   51158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:37:55.289303   51158 system_pods.go:59] 5 kube-system pods found
	I0725 18:37:55.289342   51158 system_pods.go:61] "etcd-force-systemd-env-207395" [d4b528b3-2e0e-49ee-80ae-5c3105998951] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:37:55.289356   51158 system_pods.go:61] "kube-apiserver-force-systemd-env-207395" [26d68a04-6a4a-4d12-b4f4-695ec1b18105] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:37:55.289365   51158 system_pods.go:61] "kube-controller-manager-force-systemd-env-207395" [f34e647e-16a2-4a54-8545-c8a097cb04e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:37:55.289377   51158 system_pods.go:61] "kube-scheduler-force-systemd-env-207395" [26db469f-4c3d-4e1d-99d4-c3bef5370fa1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:37:55.289381   51158 system_pods.go:61] "storage-provisioner" [0afd08cc-3170-432c-b215-b394a5459a44] Pending
	I0725 18:37:55.289391   51158 system_pods.go:74] duration metric: took 7.65278ms to wait for pod list to return data ...
	I0725 18:37:55.289402   51158 kubeadm.go:582] duration metric: took 830.494999ms to wait for: map[apiserver:true system_pods:true]
	I0725 18:37:55.289417   51158 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:37:55.295247   51158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:37:55.295273   51158 node_conditions.go:123] node cpu capacity is 2
	I0725 18:37:55.295283   51158 node_conditions.go:105] duration metric: took 5.861744ms to run NodePressure ...
	I0725 18:37:55.295293   51158 start.go:241] waiting for startup goroutines ...
	I0725 18:37:55.507515   51158 kapi.go:214] "coredns" deployment in "kube-system" namespace and "force-systemd-env-207395" context rescaled to 1 replicas
	I0725 18:37:55.507551   51158 start.go:246] waiting for cluster config update ...
	I0725 18:37:55.507561   51158 start.go:255] writing updated cluster config ...
	I0725 18:37:55.507851   51158 ssh_runner.go:195] Run: rm -f paused
	I0725 18:37:55.558671   51158 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:37:55.560874   51158 out.go:177] * Done! kubectl is now configured to use "force-systemd-env-207395" cluster and "default" namespace by default
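	The entries above trace minikube's readiness sequence for the force-systemd-env-207395 profile: wait for the kube-apiserver process with pgrep, then poll https://192.168.72.213:8443/healthz until it answers 200. The Go sketch below is illustrative only, not minikube's actual code from api_server.go; the URL, timeout, and simplified TLS handling are assumptions made to show what those "waiting for apiserver healthz status" entries correspond to.

```go
// Minimal sketch, not minikube's implementation: poll the apiserver
// /healthz endpoint until it returns HTTP 200, mirroring the
// "waiting for apiserver healthz status" entries above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 OK or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	// TLS verification is skipped here for brevity; minikube instead trusts the
	// cluster CA from the client config logged above (an assumption of this sketch).
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok" in the log
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.213:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```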
	I0725 18:37:54.273490   50912 pod_ready.go:92] pod "kube-apiserver-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:54.273512   50912 pod_ready.go:81] duration metric: took 399.248197ms for pod "kube-apiserver-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:54.273525   50912 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:54.673467   50912 pod_ready.go:92] pod "kube-controller-manager-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:54.673503   50912 pod_ready.go:81] duration metric: took 399.964582ms for pod "kube-controller-manager-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:54.673517   50912 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m4njw" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:55.074566   50912 pod_ready.go:92] pod "kube-proxy-m4njw" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:55.074592   50912 pod_ready.go:81] duration metric: took 401.06764ms for pod "kube-proxy-m4njw" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:55.074605   50912 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:55.477021   50912 pod_ready.go:92] pod "kube-scheduler-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:55.477046   50912 pod_ready.go:81] duration metric: took 402.433989ms for pod "kube-scheduler-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:55.477054   50912 pod_ready.go:38] duration metric: took 1.964977117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:37:55.477068   50912 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:37:55.477118   50912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:37:55.491228   50912 api_server.go:72] duration metric: took 2.164581843s to wait for apiserver process to appear ...
	I0725 18:37:55.491260   50912 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:37:55.491282   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:55.499030   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 200:
	ok
	I0725 18:37:55.500226   50912 api_server.go:141] control plane version: v1.30.3
	I0725 18:37:55.500247   50912 api_server.go:131] duration metric: took 8.980564ms to wait for apiserver health ...
	I0725 18:37:55.500256   50912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:37:55.676584   50912 system_pods.go:59] 6 kube-system pods found
	I0725 18:37:55.676615   50912 system_pods.go:61] "coredns-7db6d8ff4d-jn9l2" [f8c1b738-b4ca-4606-b07d-d2ce0d5149a7] Running
	I0725 18:37:55.676622   50912 system_pods.go:61] "etcd-pause-669817" [30a01595-37b9-4c88-93a0-c8d38a35074f] Running
	I0725 18:37:55.676627   50912 system_pods.go:61] "kube-apiserver-pause-669817" [f7cd9a3e-3c9b-4f08-beef-22cb0162ac30] Running
	I0725 18:37:55.676633   50912 system_pods.go:61] "kube-controller-manager-pause-669817" [57a3f32e-0861-4993-900a-09bb3dad867d] Running
	I0725 18:37:55.676640   50912 system_pods.go:61] "kube-proxy-m4njw" [300b49b6-c6ee-4298-b856-0579eecc04f4] Running
	I0725 18:37:55.676649   50912 system_pods.go:61] "kube-scheduler-pause-669817" [d8654b2f-fa11-4f06-a7ee-ca40b65bdd83] Running
	I0725 18:37:55.676658   50912 system_pods.go:74] duration metric: took 176.39439ms to wait for pod list to return data ...
	I0725 18:37:55.676670   50912 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:37:55.875233   50912 default_sa.go:45] found service account: "default"
	I0725 18:37:55.875257   50912 default_sa.go:55] duration metric: took 198.575701ms for default service account to be created ...
	I0725 18:37:55.875269   50912 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:37:56.089282   50912 system_pods.go:86] 6 kube-system pods found
	I0725 18:37:56.089310   50912 system_pods.go:89] "coredns-7db6d8ff4d-jn9l2" [f8c1b738-b4ca-4606-b07d-d2ce0d5149a7] Running
	I0725 18:37:56.089318   50912 system_pods.go:89] "etcd-pause-669817" [30a01595-37b9-4c88-93a0-c8d38a35074f] Running
	I0725 18:37:56.089323   50912 system_pods.go:89] "kube-apiserver-pause-669817" [f7cd9a3e-3c9b-4f08-beef-22cb0162ac30] Running
	I0725 18:37:56.089330   50912 system_pods.go:89] "kube-controller-manager-pause-669817" [57a3f32e-0861-4993-900a-09bb3dad867d] Running
	I0725 18:37:56.089336   50912 system_pods.go:89] "kube-proxy-m4njw" [300b49b6-c6ee-4298-b856-0579eecc04f4] Running
	I0725 18:37:56.089341   50912 system_pods.go:89] "kube-scheduler-pause-669817" [d8654b2f-fa11-4f06-a7ee-ca40b65bdd83] Running
	I0725 18:37:56.089349   50912 system_pods.go:126] duration metric: took 214.073398ms to wait for k8s-apps to be running ...
	I0725 18:37:56.089359   50912 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:37:56.089408   50912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:37:56.103426   50912 system_svc.go:56] duration metric: took 14.05623ms WaitForService to wait for kubelet
	I0725 18:37:56.103459   50912 kubeadm.go:582] duration metric: took 2.776818378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:37:56.103478   50912 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:37:56.274232   50912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:37:56.274260   50912 node_conditions.go:123] node cpu capacity is 2
	I0725 18:37:56.274274   50912 node_conditions.go:105] duration metric: took 170.790378ms to run NodePressure ...
	I0725 18:37:56.274288   50912 start.go:241] waiting for startup goroutines ...
	I0725 18:37:56.274298   50912 start.go:246] waiting for cluster config update ...
	I0725 18:37:56.274308   50912 start.go:255] writing updated cluster config ...
	I0725 18:37:56.274668   50912 ssh_runner.go:195] Run: rm -f paused
	I0725 18:37:56.321524   50912 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:37:56.324945   50912 out.go:177] * Done! kubectl is now configured to use "pause-669817" cluster and "default" namespace by default
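	Before declaring the pause-669817 profile ready, the log above also shows minikube verifying that every kube-system pod is Running, that the default service account exists, and that the kubelet systemd unit is active via `sudo systemctl is-active --quiet service kubelet`. The Go sketch below reproduces only that kubelet check for illustration; running it through a local exec call (rather than minikube's SSH runner) and the helper name kubeletActive are assumptions of this sketch.

```go
// Minimal sketch, assuming local shell access instead of minikube's SSH
// runner: run the same kubelet service check that appears in the log above.
package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet unit is active;
// `systemctl is-active --quiet` exits 0 only when the unit is active.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}

func main() {
	if kubeletActive() {
		fmt.Println("kubelet service is running")
	} else {
		fmt.Println("kubelet service is not running")
	}
}
```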
	
	
	==> CRI-O <==
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.040090018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932677039998995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11240f56-53db-4f3b-8502-77c0798f4fa0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.040554609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=696a5e6c-326a-4e6b-8750-8723f62d45ea name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.040627622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=696a5e6c-326a-4e6b-8750-8723f62d45ea name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.040883812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62d2ab96b8ce35108bf38ef19831b6202d91db2a2b6f6842a9d571aab5c4f81e,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932657661957940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a66bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46918ce287ec6e5bb99ad04f129e2283864e62e60c410fb8291e431f8ff38411,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721932657652802606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d656436d68b25ea03d2fa894a0e4f5492f7f4eea72de6430931cbd7ffdc78919,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721932653872366467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb3
1a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f7e8072412aef990e67c6187e763bcb0e17aa0e93d85fa4a8817901bfff3,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721932653852363356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d035de80b8ca5f90bb2ba4e9ca1e8d72c145b65b499384821b87ecd91270f,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721932653863886351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295ec64bbc1f5e73fbd
b11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe591f55bbac211b40bf00e3b01b0680bd8746c3d7f35bcf10391e1354fe7210,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721932653833600032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932631089700861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a6
6bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721932630671857480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721932630522880708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb31a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721932630560496537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721932630508145342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 295ec64bbc1f5e73fbdb11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721932630470342120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=696a5e6c-326a-4e6b-8750-8723f62d45ea name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.088271656Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de821c66-2bb6-4232-9bc6-e039eaf011ea name=/runtime.v1.RuntimeService/Version
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.088367816Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de821c66-2bb6-4232-9bc6-e039eaf011ea name=/runtime.v1.RuntimeService/Version
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.097584164Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba60476e-2cca-4158-a2e8-d4248794e681 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.098221233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932677098181501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba60476e-2cca-4158-a2e8-d4248794e681 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.098897951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7136c2de-9703-43f5-82d8-90c6d4cceadc name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.098973988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7136c2de-9703-43f5-82d8-90c6d4cceadc name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.099312112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62d2ab96b8ce35108bf38ef19831b6202d91db2a2b6f6842a9d571aab5c4f81e,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932657661957940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a66bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46918ce287ec6e5bb99ad04f129e2283864e62e60c410fb8291e431f8ff38411,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721932657652802606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d656436d68b25ea03d2fa894a0e4f5492f7f4eea72de6430931cbd7ffdc78919,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721932653872366467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb3
1a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f7e8072412aef990e67c6187e763bcb0e17aa0e93d85fa4a8817901bfff3,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721932653852363356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d035de80b8ca5f90bb2ba4e9ca1e8d72c145b65b499384821b87ecd91270f,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721932653863886351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295ec64bbc1f5e73fbd
b11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe591f55bbac211b40bf00e3b01b0680bd8746c3d7f35bcf10391e1354fe7210,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721932653833600032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932631089700861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a6
6bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721932630671857480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721932630522880708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb31a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721932630560496537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721932630508145342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 295ec64bbc1f5e73fbdb11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721932630470342120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7136c2de-9703-43f5-82d8-90c6d4cceadc name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.141810306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34522143-f66e-4477-a03b-ca0548207330 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.141889035Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34522143-f66e-4477-a03b-ca0548207330 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.143319615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=917555e3-1b03-4c4d-ba0e-f7e50b8ce9dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.143700055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932677143677853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=917555e3-1b03-4c4d-ba0e-f7e50b8ce9dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.144746613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e01059ac-3506-41f1-980b-6e66e56eb7c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.144952357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e01059ac-3506-41f1-980b-6e66e56eb7c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.145306418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62d2ab96b8ce35108bf38ef19831b6202d91db2a2b6f6842a9d571aab5c4f81e,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932657661957940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a66bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46918ce287ec6e5bb99ad04f129e2283864e62e60c410fb8291e431f8ff38411,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721932657652802606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d656436d68b25ea03d2fa894a0e4f5492f7f4eea72de6430931cbd7ffdc78919,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721932653872366467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb3
1a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f7e8072412aef990e67c6187e763bcb0e17aa0e93d85fa4a8817901bfff3,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721932653852363356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d035de80b8ca5f90bb2ba4e9ca1e8d72c145b65b499384821b87ecd91270f,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721932653863886351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295ec64bbc1f5e73fbd
b11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe591f55bbac211b40bf00e3b01b0680bd8746c3d7f35bcf10391e1354fe7210,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721932653833600032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932631089700861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a6
6bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721932630671857480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721932630522880708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb31a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721932630560496537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721932630508145342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 295ec64bbc1f5e73fbdb11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721932630470342120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e01059ac-3506-41f1-980b-6e66e56eb7c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.193423791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a18f3075-b9eb-4071-86b2-581d78631d2d name=/runtime.v1.RuntimeService/Version
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.193535396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a18f3075-b9eb-4071-86b2-581d78631d2d name=/runtime.v1.RuntimeService/Version
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.195376631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d26eecb7-3fa2-4e71-ba9f-977979abb24f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.195987192Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932677195959834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d26eecb7-3fa2-4e71-ba9f-977979abb24f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.196845725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cb7379c-fde7-4d94-a26b-56e5e63b3a04 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.196941615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cb7379c-fde7-4d94-a26b-56e5e63b3a04 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:57 pause-669817 crio[2487]: time="2024-07-25 18:37:57.197331516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62d2ab96b8ce35108bf38ef19831b6202d91db2a2b6f6842a9d571aab5c4f81e,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932657661957940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a66bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46918ce287ec6e5bb99ad04f129e2283864e62e60c410fb8291e431f8ff38411,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721932657652802606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d656436d68b25ea03d2fa894a0e4f5492f7f4eea72de6430931cbd7ffdc78919,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721932653872366467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb3
1a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f7e8072412aef990e67c6187e763bcb0e17aa0e93d85fa4a8817901bfff3,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721932653852363356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d035de80b8ca5f90bb2ba4e9ca1e8d72c145b65b499384821b87ecd91270f,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721932653863886351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295ec64bbc1f5e73fbd
b11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe591f55bbac211b40bf00e3b01b0680bd8746c3d7f35bcf10391e1354fe7210,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721932653833600032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932631089700861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a6
6bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721932630671857480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721932630522880708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb31a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721932630560496537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721932630508145342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 295ec64bbc1f5e73fbdb11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721932630470342120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cb7379c-fde7-4d94-a26b-56e5e63b3a04 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	62d2ab96b8ce3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   42743769d8937       coredns-7db6d8ff4d-jn9l2
	46918ce287ec6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 seconds ago      Running             kube-proxy                2                   be0c7bb84692c       kube-proxy-m4njw
	d656436d68b25       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   23 seconds ago      Running             kube-scheduler            2                   7353b0f7fbe56       kube-scheduler-pause-669817
	297d035de80b8       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   23 seconds ago      Running             kube-apiserver            2                   39effd372b6ae       kube-apiserver-pause-669817
	7ad8f7e807241       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   23 seconds ago      Running             kube-controller-manager   2                   7a70bdf419db3       kube-controller-manager-pause-669817
	fe591f55bbac2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago      Running             etcd                      2                   2ca7513c4a109       etcd-pause-669817
	0b2ab4d0d15a3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   46 seconds ago      Exited              coredns                   1                   42743769d8937       coredns-7db6d8ff4d-jn9l2
	ec10b979248ce       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   46 seconds ago      Exited              kube-proxy                1                   be0c7bb84692c       kube-proxy-m4njw
	c1e504cf40eba       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   46 seconds ago      Exited              etcd                      1                   2ca7513c4a109       etcd-pause-669817
	3e6bd9a4d3c0f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   46 seconds ago      Exited              kube-scheduler            1                   7353b0f7fbe56       kube-scheduler-pause-669817
	5c7008d55b151       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   46 seconds ago      Exited              kube-apiserver            1                   39effd372b6ae       kube-apiserver-pause-669817
	910591d676800       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   46 seconds ago      Exited              kube-controller-manager   1                   7a70bdf419db3       kube-controller-manager-pause-669817
	
	
	==> coredns [0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:35335 - 29942 "HINFO IN 4362430812324189062.9196917897299404625. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016876368s
	
	
	==> coredns [62d2ab96b8ce35108bf38ef19831b6202d91db2a2b6f6842a9d571aab5c4f81e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60185 - 4305 "HINFO IN 8691005056273813146.3009638441941347370. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006472656s
	
	
	==> describe nodes <==
	Name:               pause-669817
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-669817
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=pause-669817
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_35_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:35:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-669817
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:37:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 18:37:37 +0000   Thu, 25 Jul 2024 18:35:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 18:37:37 +0000   Thu, 25 Jul 2024 18:35:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 18:37:37 +0000   Thu, 25 Jul 2024 18:35:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 18:37:37 +0000   Thu, 25 Jul 2024 18:35:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.203
	  Hostname:    pause-669817
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 55c8c8190391493fb96119b8228073ce
	  System UUID:                55c8c819-0391-493f-b961-19b8228073ce
	  Boot ID:                    d397889d-837e-4880-88b5-0554ffd041a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jn9l2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
	  kube-system                 etcd-pause-669817                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-669817             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-pause-669817    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-m4njw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-pause-669817             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node pause-669817 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node pause-669817 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node pause-669817 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeReady                119s               kubelet          Node pause-669817 status is now: NodeReady
	  Normal  RegisteredNode           106s               node-controller  Node pause-669817 event: Registered Node pause-669817 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-669817 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-669817 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-669817 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-669817 event: Registered Node pause-669817 in Controller
	
	
	==> dmesg <==
	[  +9.860959] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.062323] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061236] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.178226] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.122893] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.273943] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.246460] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +5.166557] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.063515] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.498860] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.085098] kauditd_printk_skb: 69 callbacks suppressed
	[Jul25 18:36] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +0.116665] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.924200] kauditd_printk_skb: 86 callbacks suppressed
	[Jul25 18:37] systemd-fstab-generator[2407]: Ignoring "noauto" option for root device
	[  +0.167475] systemd-fstab-generator[2419]: Ignoring "noauto" option for root device
	[  +0.183727] systemd-fstab-generator[2434]: Ignoring "noauto" option for root device
	[  +0.159811] systemd-fstab-generator[2445]: Ignoring "noauto" option for root device
	[  +0.284786] systemd-fstab-generator[2473]: Ignoring "noauto" option for root device
	[  +3.668695] systemd-fstab-generator[2669]: Ignoring "noauto" option for root device
	[  +0.698945] kauditd_printk_skb: 185 callbacks suppressed
	[ +10.906815] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.266972] systemd-fstab-generator[3462]: Ignoring "noauto" option for root device
	[ +16.885998] kauditd_printk_skb: 52 callbacks suppressed
	[  +3.371105] systemd-fstab-generator[3917]: Ignoring "noauto" option for root device
	
	
	==> etcd [c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258] <==
	{"level":"info","ts":"2024-07-25T18:37:11.303934Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-07-25T18:37:12.204101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-25T18:37:12.204176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-25T18:37:12.204218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgPreVoteResp from 3dce464254b32e20 at term 2"}
	{"level":"info","ts":"2024-07-25T18:37:12.204234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became candidate at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:12.204239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgVoteResp from 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:12.20425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became leader at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:12.20426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dce464254b32e20 elected leader 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:12.206358Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3dce464254b32e20","local-member-attributes":"{Name:pause-669817 ClientURLs:[https://192.168.61.203:2379]}","request-path":"/0/members/3dce464254b32e20/attributes","cluster-id":"817eda555b894faf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:37:12.206448Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:37:12.206565Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:37:12.209128Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.203:2379"}
	{"level":"info","ts":"2024-07-25T18:37:12.210614Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:37:12.212071Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:37:12.21212Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:37:21.678599Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-25T18:37:21.678693Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-669817","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"]}
	{"level":"warn","ts":"2024-07-25T18:37:21.678797Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:37:21.678937Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:37:21.698491Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.203:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:37:21.698534Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.203:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-25T18:37:21.698601Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3dce464254b32e20","current-leader-member-id":"3dce464254b32e20"}
	{"level":"info","ts":"2024-07-25T18:37:21.701996Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-07-25T18:37:21.702192Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-07-25T18:37:21.70222Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-669817","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"]}
	
	
	==> etcd [fe591f55bbac211b40bf00e3b01b0680bd8746c3d7f35bcf10391e1354fe7210] <==
	{"level":"info","ts":"2024-07-25T18:37:34.178093Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T18:37:34.178135Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T18:37:34.179171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 switched to configuration voters=(4453574332218813984)"}
	{"level":"info","ts":"2024-07-25T18:37:34.179247Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","added-peer-id":"3dce464254b32e20","added-peer-peer-urls":["https://192.168.61.203:2380"]}
	{"level":"info","ts":"2024-07-25T18:37:34.179365Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:37:34.179409Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:37:34.186649Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-25T18:37:34.186885Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3dce464254b32e20","initial-advertise-peer-urls":["https://192.168.61.203:2380"],"listen-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:37:34.186926Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:37:34.187495Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-07-25T18:37:34.187547Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-07-25T18:37:35.654097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:35.6542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:35.654246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgPreVoteResp from 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:35.654315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became candidate at term 4"}
	{"level":"info","ts":"2024-07-25T18:37:35.654323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgVoteResp from 3dce464254b32e20 at term 4"}
	{"level":"info","ts":"2024-07-25T18:37:35.654331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became leader at term 4"}
	{"level":"info","ts":"2024-07-25T18:37:35.654339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dce464254b32e20 elected leader 3dce464254b32e20 at term 4"}
	{"level":"info","ts":"2024-07-25T18:37:35.660959Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3dce464254b32e20","local-member-attributes":"{Name:pause-669817 ClientURLs:[https://192.168.61.203:2379]}","request-path":"/0/members/3dce464254b32e20/attributes","cluster-id":"817eda555b894faf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:37:35.661166Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:37:35.661518Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:37:35.663161Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:37:35.663686Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:37:35.663715Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:37:35.667929Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.203:2379"}
	
	
	==> kernel <==
	 18:37:57 up 2 min,  0 users,  load average: 1.72, 0.71, 0.27
	Linux pause-669817 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [297d035de80b8ca5f90bb2ba4e9ca1e8d72c145b65b499384821b87ecd91270f] <==
	I0725 18:37:37.160367       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0725 18:37:37.160512       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 18:37:37.162670       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 18:37:37.170289       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0725 18:37:37.172046       1 shared_informer.go:320] Caches are synced for configmaps
	I0725 18:37:37.172130       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0725 18:37:37.172140       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0725 18:37:37.172326       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0725 18:37:37.179180       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0725 18:37:37.185938       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0725 18:37:37.186052       1 aggregator.go:165] initial CRD sync complete...
	I0725 18:37:37.186113       1 autoregister_controller.go:141] Starting autoregister controller
	I0725 18:37:37.186140       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0725 18:37:37.186211       1 cache.go:39] Caches are synced for autoregister controller
	I0725 18:37:37.187530       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0725 18:37:37.187580       1 policy_source.go:224] refreshing policies
	I0725 18:37:37.227985       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 18:37:38.071545       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 18:37:38.579691       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0725 18:37:38.600855       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0725 18:37:38.667317       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0725 18:37:38.706078       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 18:37:38.714799       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 18:37:49.976171       1 controller.go:615] quota admission added evaluator for: endpoints
	I0725 18:37:49.978323       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7] <==
	W0725 18:37:31.202510       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.202653       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.203969       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.234348       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.275401       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.294081       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.334706       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.354857       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.364959       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.367426       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.367466       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.370858       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.394654       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.451295       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.464589       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.528570       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.556182       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.567168       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.652895       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.654208       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.679284       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.707320       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.723895       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.765901       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.980283       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7ad8f7e8072412aef990e67c6187e763bcb0e17aa0e93d85fa4a8817901bfff3] <==
	I0725 18:37:50.009637       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0725 18:37:50.009840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.466µs"
	I0725 18:37:50.022532       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0725 18:37:50.024066       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0725 18:37:50.026048       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0725 18:37:50.026851       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0725 18:37:50.029455       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0725 18:37:50.030806       1 shared_informer.go:320] Caches are synced for expand
	I0725 18:37:50.030873       1 shared_informer.go:320] Caches are synced for namespace
	I0725 18:37:50.043916       1 shared_informer.go:320] Caches are synced for ephemeral
	I0725 18:37:50.047302       1 shared_informer.go:320] Caches are synced for GC
	I0725 18:37:50.060920       1 shared_informer.go:320] Caches are synced for cronjob
	I0725 18:37:50.071813       1 shared_informer.go:320] Caches are synced for job
	I0725 18:37:50.075278       1 shared_informer.go:320] Caches are synced for taint
	I0725 18:37:50.075429       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0725 18:37:50.075552       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-669817"
	I0725 18:37:50.075598       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0725 18:37:50.081075       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0725 18:37:50.217796       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 18:37:50.221202       1 shared_informer.go:320] Caches are synced for disruption
	I0725 18:37:50.231516       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 18:37:50.257061       1 shared_informer.go:320] Caches are synced for stateful set
	I0725 18:37:50.656730       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 18:37:50.696104       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 18:37:50.696139       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8] <==
	I0725 18:37:15.597738       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0725 18:37:15.597773       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0725 18:37:15.599509       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0725 18:37:15.599624       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0725 18:37:15.599722       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0725 18:37:15.602376       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0725 18:37:15.602405       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0725 18:37:15.602571       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0725 18:37:15.602599       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0725 18:37:15.602619       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0725 18:37:15.602650       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0725 18:37:15.605114       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0725 18:37:15.605440       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0725 18:37:15.605687       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0725 18:37:15.625898       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0725 18:37:15.626091       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0725 18:37:15.626124       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0725 18:37:15.631525       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0725 18:37:15.631582       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0725 18:37:15.631623       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0725 18:37:15.632154       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0725 18:37:15.635074       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0725 18:37:15.635222       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0725 18:37:15.636298       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0725 18:37:15.648982       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-proxy [46918ce287ec6e5bb99ad04f129e2283864e62e60c410fb8291e431f8ff38411] <==
	I0725 18:37:37.855406       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:37:37.869245       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.203"]
	I0725 18:37:37.918631       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:37:37.918690       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:37:37.918711       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:37:37.921634       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:37:37.921851       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:37:37.921873       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:37:37.923471       1 config.go:192] "Starting service config controller"
	I0725 18:37:37.923502       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:37:37.923526       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:37:37.923530       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:37:37.923887       1 config.go:319] "Starting node config controller"
	I0725 18:37:37.923917       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:37:38.024147       1 shared_informer.go:320] Caches are synced for node config
	I0725 18:37:38.024267       1 shared_informer.go:320] Caches are synced for service config
	I0725 18:37:38.024299       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03] <==
	I0725 18:37:12.151357       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:37:13.588533       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.203"]
	I0725 18:37:13.645031       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:37:13.645106       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:37:13.645121       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:37:13.648462       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:37:13.648740       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:37:13.648765       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:37:13.650696       1 config.go:319] "Starting node config controller"
	I0725 18:37:13.650721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:37:13.651499       1 config.go:192] "Starting service config controller"
	I0725 18:37:13.651584       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:37:13.651833       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:37:13.651887       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:37:13.751241       1 shared_informer.go:320] Caches are synced for node config
	I0725 18:37:13.758953       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:37:13.760490       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed] <==
	I0725 18:37:12.127884       1 serving.go:380] Generated self-signed cert in-memory
	W0725 18:37:13.542368       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 18:37:13.542457       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:37:13.542493       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 18:37:13.542517       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 18:37:13.583870       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 18:37:13.584621       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:37:13.587317       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:37:13.587415       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:37:13.587563       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 18:37:13.587638       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 18:37:13.688761       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:37:21.863684       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0725 18:37:21.863806       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0725 18:37:21.863929       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0725 18:37:21.864485       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d656436d68b25ea03d2fa894a0e4f5492f7f4eea72de6430931cbd7ffdc78919] <==
	I0725 18:37:34.999147       1 serving.go:380] Generated self-signed cert in-memory
	I0725 18:37:37.194643       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 18:37:37.194751       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:37:37.200383       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 18:37:37.200478       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0725 18:37:37.200501       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0725 18:37:37.200529       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 18:37:37.208422       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:37:37.209433       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:37:37.209532       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0725 18:37:37.209558       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 18:37:37.301902       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0725 18:37:37.310699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 18:37:37.310830       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.544769    3469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f489f4d5846c1eb526b11c16fac51984-k8s-certs\") pod \"kube-controller-manager-pause-669817\" (UID: \"f489f4d5846c1eb526b11c16fac51984\") " pod="kube-system/kube-controller-manager-pause-669817"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.544784    3469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f489f4d5846c1eb526b11c16fac51984-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-669817\" (UID: \"f489f4d5846c1eb526b11c16fac51984\") " pod="kube-system/kube-controller-manager-pause-669817"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.544800    3469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/1852460407fc0267ac60e859363806f7-etcd-certs\") pod \"etcd-pause-669817\" (UID: \"1852460407fc0267ac60e859363806f7\") " pod="kube-system/etcd-pause-669817"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.642517    3469 kubelet_node_status.go:73] "Attempting to register node" node="pause-669817"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: E0725 18:37:33.643423    3469 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.203:8443: connect: connection refused" node="pause-669817"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.815114    3469 scope.go:117] "RemoveContainer" containerID="c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.816143    3469 scope.go:117] "RemoveContainer" containerID="5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.818107    3469 scope.go:117] "RemoveContainer" containerID="910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.819476    3469 scope.go:117] "RemoveContainer" containerID="3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: E0725 18:37:33.941807    3469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-669817?timeout=10s\": dial tcp 192.168.61.203:8443: connect: connection refused" interval="800ms"
	Jul 25 18:37:34 pause-669817 kubelet[3469]: I0725 18:37:34.045718    3469 kubelet_node_status.go:73] "Attempting to register node" node="pause-669817"
	Jul 25 18:37:34 pause-669817 kubelet[3469]: E0725 18:37:34.047519    3469 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.203:8443: connect: connection refused" node="pause-669817"
	Jul 25 18:37:34 pause-669817 kubelet[3469]: I0725 18:37:34.849802    3469 kubelet_node_status.go:73] "Attempting to register node" node="pause-669817"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.205812    3469 kubelet_node_status.go:112] "Node was previously registered" node="pause-669817"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.206328    3469 kubelet_node_status.go:76] "Successfully registered node" node="pause-669817"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.208867    3469 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.210163    3469 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.317938    3469 apiserver.go:52] "Watching apiserver"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.324431    3469 topology_manager.go:215] "Topology Admit Handler" podUID="f8c1b738-b4ca-4606-b07d-d2ce0d5149a7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jn9l2"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.324592    3469 topology_manager.go:215] "Topology Admit Handler" podUID="300b49b6-c6ee-4298-b856-0579eecc04f4" podNamespace="kube-system" podName="kube-proxy-m4njw"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.341908    3469 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.416634    3469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/300b49b6-c6ee-4298-b856-0579eecc04f4-xtables-lock\") pod \"kube-proxy-m4njw\" (UID: \"300b49b6-c6ee-4298-b856-0579eecc04f4\") " pod="kube-system/kube-proxy-m4njw"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.416774    3469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/300b49b6-c6ee-4298-b856-0579eecc04f4-lib-modules\") pod \"kube-proxy-m4njw\" (UID: \"300b49b6-c6ee-4298-b856-0579eecc04f4\") " pod="kube-system/kube-proxy-m4njw"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.625965    3469 scope.go:117] "RemoveContainer" containerID="ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.627763    3469 scope.go:117] "RemoveContainer" containerID="0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-669817 -n pause-669817
helpers_test.go:261: (dbg) Run:  kubectl --context pause-669817 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-669817 -n pause-669817
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-669817 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-669817 logs -n 25: (1.674000725s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-160946      | minikube                 | jenkins | v1.26.0 | 25 Jul 24 18:36 UTC | 25 Jul 24 18:37 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                          |         |         |                     |                     |
	|         |  --container-runtime=crio      |                          |         |         |                     |                     |
	| start   | -p pause-669817                | pause-669817             | jenkins | v1.33.1 | 25 Jul 24 18:36 UTC | 25 Jul 24 18:37 UTC |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| delete  | -p running-upgrade-919785      | running-upgrade-919785   | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC | 25 Jul 24 18:37 UTC |
	| start   | -p force-systemd-env-207395    | force-systemd-env-207395 | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC | 25 Jul 24 18:37 UTC |
	|         | --memory=2048                  |                          |         |         |                     |                     |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| stop    | stopped-upgrade-160946 stop    | minikube                 | jenkins | v1.26.0 | 25 Jul 24 18:37 UTC | 25 Jul 24 18:37 UTC |
	| start   | -p stopped-upgrade-160946      | stopped-upgrade-160946   | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | --memory=2200                  |                          |         |         |                     |                     |
	|         | --alsologtostderr              |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                          |         |         |                     |                     |
	|         | --container-runtime=crio       |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-207395    | force-systemd-env-207395 | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC | 25 Jul 24 18:37 UTC |
	| ssh     | -p kubenet-889508 sudo cat     | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | /etc/nsswitch.conf             |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo cat     | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | /etc/hosts                     |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo cat     | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | /etc/resolv.conf               |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo crictl  | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | pods                           |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo crictl  | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | ps --all                       |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo find    | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | /etc/cni -type f -exec sh -c   |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo ip a s  | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	| ssh     | -p kubenet-889508 sudo ip r s  | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	| ssh     | -p kubenet-889508 sudo         | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | iptables-save                  |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo         | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | iptables -t nat -L -n -v       |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo         | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | systemctl status kubelet --all |                          |         |         |                     |                     |
	|         | --full --no-pager              |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo         | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | systemctl cat kubelet          |                          |         |         |                     |                     |
	|         | --no-pager                     |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo         | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | journalctl -xeu kubelet --all  |                          |         |         |                     |                     |
	|         | --full --no-pager              |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo cat     | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf   |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo cat     | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | /var/lib/kubelet/config.yaml   |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo         | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | systemctl status docker --all  |                          |         |         |                     |                     |
	|         | --full --no-pager              |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo         | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | systemctl cat docker           |                          |         |         |                     |                     |
	|         | --no-pager                     |                          |         |         |                     |                     |
	| ssh     | -p kubenet-889508 sudo cat     | kubenet-889508           | jenkins | v1.33.1 | 25 Jul 24 18:37 UTC |                     |
	|         | /etc/docker/daemon.json        |                          |         |         |                     |                     |
	|---------|--------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:37:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:37:26.879428   51390 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:37:26.879559   51390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:37:26.879568   51390 out.go:304] Setting ErrFile to fd 2...
	I0725 18:37:26.879574   51390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:37:26.879765   51390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:37:26.880308   51390 out.go:298] Setting JSON to false
	I0725 18:37:26.881354   51390 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4791,"bootTime":1721927856,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:37:26.881414   51390 start.go:139] virtualization: kvm guest
	I0725 18:37:26.883822   51390 out.go:177] * [stopped-upgrade-160946] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:37:26.885322   51390 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:37:26.885360   51390 notify.go:220] Checking for updates...
	I0725 18:37:26.887941   51390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:37:26.889142   51390 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:37:26.890333   51390 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:37:26.891697   51390 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:37:26.893323   51390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:37:26.895412   51390 config.go:182] Loaded profile config "stopped-upgrade-160946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0725 18:37:26.895907   51390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:26.895956   51390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:26.915999   51390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37709
	I0725 18:37:26.916354   51390 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:26.916883   51390 main.go:141] libmachine: Using API Version  1
	I0725 18:37:26.916910   51390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:26.917232   51390 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:26.917413   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:26.919288   51390 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0725 18:37:26.920622   51390 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:37:26.920978   51390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:26.921021   51390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:26.936666   51390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43681
	I0725 18:37:26.937109   51390 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:26.937664   51390 main.go:141] libmachine: Using API Version  1
	I0725 18:37:26.937691   51390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:26.938002   51390 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:26.938277   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:26.979948   51390 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:37:26.981247   51390 start.go:297] selected driver: kvm2
	I0725 18:37:26.981266   51390 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-160946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-160946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 18:37:26.981401   51390 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:37:26.982355   51390 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:37:26.982450   51390 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:37:26.997198   51390 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:37:26.997579   51390 cni.go:84] Creating CNI manager for ""
	I0725 18:37:26.997594   51390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:37:26.997651   51390 start.go:340] cluster config:
	{Name:stopped-upgrade-160946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-160946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 18:37:26.997776   51390 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:37:26.999506   51390 out.go:177] * Starting "stopped-upgrade-160946" primary control-plane node in "stopped-upgrade-160946" cluster
	I0725 18:37:26.652534   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:37:26.652830   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:37:25.869270   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:25.869759   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | unable to find current IP address of domain force-systemd-env-207395 in network mk-force-systemd-env-207395
	I0725 18:37:25.869788   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | I0725 18:37:25.869687   51181 retry.go:31] will retry after 2.011529099s: waiting for machine to come up
	I0725 18:37:27.882532   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:27.883005   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | unable to find current IP address of domain force-systemd-env-207395 in network mk-force-systemd-env-207395
	I0725 18:37:27.883030   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | I0725 18:37:27.882971   51181 retry.go:31] will retry after 2.958130035s: waiting for machine to come up
	I0725 18:37:27.000740   51390 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0725 18:37:27.000786   51390 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0725 18:37:27.000795   51390 cache.go:56] Caching tarball of preloaded images
	I0725 18:37:27.000864   51390 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:37:27.000874   51390 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0725 18:37:27.000966   51390 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/stopped-upgrade-160946/config.json ...
	I0725 18:37:27.001163   51390 start.go:360] acquireMachinesLock for stopped-upgrade-160946: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:37:32.216067   50912 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442 ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03 c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258 3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed 5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7 910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8 9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1 705b92bc54aa08201294e3ad0a9ec7e2f02880a086eda11d1a58dae73d4b13ed 519afe169828797418671c396000076f8634c6b49a2981a7c7516f12e89c80e1 2b6e0cfec3b4c6dea1f9d4dc34c4d7b7cf4b728f1365ca8a4abf03365602b28e abd981f0e09b9f6367b33ca9138d711db4fbe049eeb639769ae14a20a321477a f4e057b383c5b27de595a9cb12630c202609d011edc6e0560eb965028f7aa5a6: (20.535092336s)
	W0725 18:37:32.216145   50912 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442 ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03 c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258 3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed 5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7 910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8 9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1 705b92bc54aa08201294e3ad0a9ec7e2f02880a086eda11d1a58dae73d4b13ed 519afe169828797418671c396000076f8634c6b49a2981a7c7516f12e89c80e1 2b6e0cfec3b4c6dea1f9d4dc34c4d7b7cf4b728f1365ca8a4abf03365602b28e abd981f0e09b9f6367b33ca9138d711db4fbe049eeb639769ae14a20a321477a f4e057b383c5b27de595a9cb12630c202609d011edc6e0560eb965028f7aa5a6: Process exited with status 1
	stdout:
	0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442
	ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03
	c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258
	3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed
	5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7
	910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8
	
	stderr:
	E0725 18:37:32.198472    3199 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1\": container with ID starting with 9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1 not found: ID does not exist" containerID="9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1"
	time="2024-07-25T18:37:32Z" level=fatal msg="stopping the container \"9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1\": rpc error: code = NotFound desc = could not find container \"9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1\": container with ID starting with 9411e9f0c7fb83f449a53a714f631c15593478026a68966ec5446a78366a20d1 not found: ID does not exist"
	I0725 18:37:32.216196   50912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:37:32.255733   50912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:37:32.265718   50912 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 25 18:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul 25 18:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 25 18:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jul 25 18:35 /etc/kubernetes/scheduler.conf
	
	I0725 18:37:32.265773   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:37:32.274483   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:37:32.282807   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:37:32.290907   50912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:37:32.290972   50912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:37:32.299298   50912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:37:32.307270   50912 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:37:32.307315   50912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:37:32.315632   50912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:37:32.323889   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:32.379132   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:33.013824   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:33.214772   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:33.281103   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:33.362915   50912 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:37:33.362993   50912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:37:33.864017   50912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:37:30.842495   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:30.842952   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | unable to find current IP address of domain force-systemd-env-207395 in network mk-force-systemd-env-207395
	I0725 18:37:30.842975   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | I0725 18:37:30.842891   51181 retry.go:31] will retry after 4.306112991s: waiting for machine to come up
	I0725 18:37:36.612945   51390 start.go:364] duration metric: took 9.611724038s to acquireMachinesLock for "stopped-upgrade-160946"
	I0725 18:37:36.613018   51390 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:37:36.613029   51390 fix.go:54] fixHost starting: 
	I0725 18:37:36.613457   51390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:36.613511   51390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:36.631531   51390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44583
	I0725 18:37:36.632018   51390 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:36.632593   51390 main.go:141] libmachine: Using API Version  1
	I0725 18:37:36.632617   51390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:36.632952   51390 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:36.633143   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:36.633309   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetState
	I0725 18:37:36.634837   51390 fix.go:112] recreateIfNeeded on stopped-upgrade-160946: state=Stopped err=<nil>
	I0725 18:37:36.634860   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	W0725 18:37:36.634999   51390 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:37:36.637119   51390 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-160946" ...
	I0725 18:37:36.638524   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .Start
	I0725 18:37:36.638728   51390 main.go:141] libmachine: (stopped-upgrade-160946) Ensuring networks are active...
	I0725 18:37:36.639534   51390 main.go:141] libmachine: (stopped-upgrade-160946) Ensuring network default is active
	I0725 18:37:36.639921   51390 main.go:141] libmachine: (stopped-upgrade-160946) Ensuring network mk-stopped-upgrade-160946 is active
	I0725 18:37:36.640342   51390 main.go:141] libmachine: (stopped-upgrade-160946) Getting domain xml...
	I0725 18:37:36.641152   51390 main.go:141] libmachine: (stopped-upgrade-160946) Creating domain...
	I0725 18:37:35.154527   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.155009   51158 main.go:141] libmachine: (force-systemd-env-207395) Found IP for machine: 192.168.72.213
	I0725 18:37:35.155035   51158 main.go:141] libmachine: (force-systemd-env-207395) Reserving static IP address...
	I0725 18:37:35.155049   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has current primary IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.155454   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | unable to find host DHCP lease matching {name: "force-systemd-env-207395", mac: "52:54:00:5d:0f:d7", ip: "192.168.72.213"} in network mk-force-systemd-env-207395
	I0725 18:37:35.228956   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | Getting to WaitForSSH function...
	I0725 18:37:35.228989   51158 main.go:141] libmachine: (force-systemd-env-207395) Reserved static IP address: 192.168.72.213
	I0725 18:37:35.229003   51158 main.go:141] libmachine: (force-systemd-env-207395) Waiting for SSH to be available...
	I0725 18:37:35.231631   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.232117   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.232140   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.232277   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | Using SSH client type: external
	I0725 18:37:35.232299   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa (-rw-------)
	I0725 18:37:35.232375   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:37:35.232396   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | About to run SSH command:
	I0725 18:37:35.232413   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | exit 0
	I0725 18:37:35.356671   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | SSH cmd err, output: <nil>: 
	I0725 18:37:35.357009   51158 main.go:141] libmachine: (force-systemd-env-207395) KVM machine creation complete!
	I0725 18:37:35.357317   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetConfigRaw
	I0725 18:37:35.358013   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:35.358214   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:35.358421   51158 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 18:37:35.358439   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetState
	I0725 18:37:35.360001   51158 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 18:37:35.360018   51158 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 18:37:35.360026   51158 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 18:37:35.360035   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.362702   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.363147   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.363186   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.363314   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:35.363517   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.363694   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.363835   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:35.364051   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:35.364408   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:35.364427   51158 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 18:37:35.467507   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:37:35.467530   51158 main.go:141] libmachine: Detecting the provisioner...
	I0725 18:37:35.467541   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.470207   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.470657   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.470695   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.470848   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:35.471003   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.471170   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.471335   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:35.471512   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:35.471689   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:35.471702   51158 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 18:37:35.580693   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 18:37:35.580752   51158 main.go:141] libmachine: found compatible host: buildroot
	I0725 18:37:35.580759   51158 main.go:141] libmachine: Provisioning with buildroot...
	I0725 18:37:35.580768   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetMachineName
	I0725 18:37:35.581044   51158 buildroot.go:166] provisioning hostname "force-systemd-env-207395"
	I0725 18:37:35.581066   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetMachineName
	I0725 18:37:35.581280   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.584006   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.584403   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.584434   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.584598   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:35.584796   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.584965   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.585093   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:35.585309   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:35.585533   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:35.585556   51158 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-207395 && echo "force-systemd-env-207395" | sudo tee /etc/hostname
	I0725 18:37:35.706662   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-207395
	
	I0725 18:37:35.706700   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.709663   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.710074   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.710107   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.710330   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:35.710560   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.710732   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.710889   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:35.711091   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:35.711303   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:35.711329   51158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-207395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-207395/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-207395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:37:35.824397   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:37:35.824429   51158 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:37:35.824451   51158 buildroot.go:174] setting up certificates
	I0725 18:37:35.824484   51158 provision.go:84] configureAuth start
	I0725 18:37:35.824500   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetMachineName
	I0725 18:37:35.824778   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetIP
	I0725 18:37:35.827492   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.827989   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.828025   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.828188   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.830514   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.830850   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.830879   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.830977   51158 provision.go:143] copyHostCerts
	I0725 18:37:35.831006   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:37:35.831042   51158 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:37:35.831062   51158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:37:35.831129   51158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:37:35.831244   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:37:35.831273   51158 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:37:35.831279   51158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:37:35.831353   51158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:37:35.831461   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:37:35.831481   51158 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:37:35.831485   51158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:37:35.831520   51158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:37:35.831602   51158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-207395 san=[127.0.0.1 192.168.72.213 force-systemd-env-207395 localhost minikube]
	I0725 18:37:35.930422   51158 provision.go:177] copyRemoteCerts
	I0725 18:37:35.930483   51158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:37:35.930505   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:35.933395   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.933731   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:35.933761   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:35.933947   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:35.934123   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:35.934244   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:35.934361   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:36.018074   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0725 18:37:36.018152   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:37:36.042953   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0725 18:37:36.043026   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0725 18:37:36.068853   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0725 18:37:36.068934   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:37:36.093609   51158 provision.go:87] duration metric: took 269.109544ms to configureAuth
	I0725 18:37:36.093637   51158 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:37:36.093841   51158 config.go:182] Loaded profile config "force-systemd-env-207395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:37:36.093915   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:36.096740   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.097099   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.097130   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.097284   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:36.097495   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.097652   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.097835   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:36.097970   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:36.098135   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:36.098148   51158 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:37:36.372553   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:37:36.372582   51158 main.go:141] libmachine: Checking connection to Docker...
	I0725 18:37:36.372594   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetURL
	I0725 18:37:36.373947   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | Using libvirt version 6000000
	I0725 18:37:36.376516   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.376893   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.376914   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.377130   51158 main.go:141] libmachine: Docker is up and running!
	I0725 18:37:36.377142   51158 main.go:141] libmachine: Reticulating splines...
	I0725 18:37:36.377148   51158 client.go:171] duration metric: took 21.156697171s to LocalClient.Create
	I0725 18:37:36.377166   51158 start.go:167] duration metric: took 21.15675888s to libmachine.API.Create "force-systemd-env-207395"
	I0725 18:37:36.377175   51158 start.go:293] postStartSetup for "force-systemd-env-207395" (driver="kvm2")
	I0725 18:37:36.377185   51158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:37:36.377201   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:36.377387   51158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:37:36.377407   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:36.379588   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.379897   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.379929   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.380078   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:36.380273   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.380442   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:36.380587   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:36.465651   51158 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:37:36.470404   51158 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:37:36.470435   51158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:37:36.470505   51158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:37:36.470601   51158 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:37:36.470612   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /etc/ssl/certs/130592.pem
	I0725 18:37:36.470693   51158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:37:36.479530   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:37:36.501772   51158 start.go:296] duration metric: took 124.583545ms for postStartSetup
	I0725 18:37:36.501832   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetConfigRaw
	I0725 18:37:36.502439   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetIP
	I0725 18:37:36.505291   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.505695   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.505723   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.505966   51158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/config.json ...
	I0725 18:37:36.506128   51158 start.go:128] duration metric: took 21.30342098s to createHost
	I0725 18:37:36.506150   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:36.508241   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.508655   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.508697   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.508754   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:36.508948   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.509111   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.509302   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:36.509476   51158 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:36.509649   51158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0725 18:37:36.509666   51158 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:37:36.612772   51158 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721932656.587664722
	
	I0725 18:37:36.612794   51158 fix.go:216] guest clock: 1721932656.587664722
	I0725 18:37:36.612804   51158 fix.go:229] Guest: 2024-07-25 18:37:36.587664722 +0000 UTC Remote: 2024-07-25 18:37:36.506139556 +0000 UTC m=+21.426247860 (delta=81.525166ms)
	I0725 18:37:36.612827   51158 fix.go:200] guest clock delta is within tolerance: 81.525166ms
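The clock check above probes the guest with `date +%s.%N` (the `%!s(MISSING).%!N(MISSING)` rendering a few lines earlier is just Go's fmt complaining about the literal percent signs) and compares the parsed result against the host's wall clock, accepting the observed ~81ms skew. A rough Go sketch of that comparison; the parsing helper and the 1s tolerance are assumptions, not values taken from the minikube source:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "1721932656.587664722" (date +%s.%N output)
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721932656.587664722")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	// Assumed tolerance; the log only records that the ~81ms delta passed.
	if math.Abs(delta.Seconds()) < 1.0 {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock skew too large: %v\n", delta)
	}
}
```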
	I0725 18:37:36.612833   51158 start.go:83] releasing machines lock for "force-systemd-env-207395", held for 21.410232895s
	I0725 18:37:36.612863   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:36.613120   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetIP
	I0725 18:37:36.616079   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.616477   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.616520   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.616680   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:36.617215   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:36.617437   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:36.617552   51158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:37:36.617608   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:36.617648   51158 ssh_runner.go:195] Run: cat /version.json
	I0725 18:37:36.617675   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:36.620631   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.621623   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.621663   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.621683   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.621951   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:36.622157   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.622163   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:36.622197   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:36.622256   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:36.622353   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:36.622435   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:36.622587   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:36.622599   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:36.622691   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:36.729608   51158 ssh_runner.go:195] Run: systemctl --version
	I0725 18:37:36.736844   51158 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:37:36.897908   51158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:37:36.907655   51158 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:37:36.907733   51158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:37:36.926819   51158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:37:36.926849   51158 start.go:495] detecting cgroup driver to use...
	I0725 18:37:36.926869   51158 start.go:499] using "systemd" cgroup driver as enforced via flags
	I0725 18:37:36.926922   51158 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:37:36.945203   51158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:37:36.961321   51158 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:37:36.961395   51158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:37:36.979396   51158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:37:36.995540   51158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:37:37.133097   51158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:37:37.286124   51158 docker.go:233] disabling docker service ...
	I0725 18:37:37.286201   51158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:37:37.305141   51158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:37:37.323346   51158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:37:37.472041   51158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:37:37.605641   51158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:37:37.619013   51158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:37:37.635734   51158 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:37:37.635800   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.645868   51158 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0725 18:37:37.645941   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.656853   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.667507   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.681663   51158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:37:37.696044   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.710259   51158 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.732363   51158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:37.745912   51158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:37:37.758312   51158 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:37:37.758386   51158 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:37:37.773040   51158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:37:37.784057   51158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:37:37.945371   51158 ssh_runner.go:195] Run: sudo systemctl restart crio
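Taken together, the crictl and sed steps above leave the guest with a crictl.yaml pointing at the CRI-O socket and a 02-crio.conf drop-in that pins the pause image, switches the cgroup manager to systemd, sets the conmon cgroup, and opens unprivileged ports. Below is a hedged reconstruction of roughly what those files end up containing, held as Go string constants for reference: the crictl.yaml text and the edited keys come straight from the commands in the log, while the TOML table headers are assumed from CRI-O's documented layout and the rest of 02-crio.conf is omitted.

```go
package main

import "fmt"

// Written verbatim by the tee command in the log.
const crictlYAML = `runtime-endpoint: unix:///var/run/crio/crio.sock
`

// Only the keys touched by the sed invocations are shown; section headers
// are an assumption about where CRI-O keeps these settings.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	fmt.Print(crictlYAML, "\n", crioDropIn)
}
```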
	I0725 18:37:38.091653   51158 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:37:38.091739   51158 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:37:38.096974   51158 start.go:563] Will wait 60s for crictl version
	I0725 18:37:38.097055   51158 ssh_runner.go:195] Run: which crictl
	I0725 18:37:38.101593   51158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:37:38.149371   51158 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:37:38.149463   51158 ssh_runner.go:195] Run: crio --version
	I0725 18:37:38.183854   51158 ssh_runner.go:195] Run: crio --version
	I0725 18:37:38.221940   51158 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:37:34.363464   50912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:37:34.378086   50912 api_server.go:72] duration metric: took 1.015171096s to wait for apiserver process to appear ...
	I0725 18:37:34.378114   50912 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:37:34.378135   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:37.102786   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:37:37.102819   50912 api_server.go:103] status: https://192.168.61.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:37:37.102834   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:37.131736   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:37:37.131772   50912 api_server.go:103] status: https://192.168.61.203:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:37:37.379217   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:37.384876   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:37:37.384908   50912 api_server.go:103] status: https://192.168.61.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:37:37.878529   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:37.884844   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:37:37.884873   50912 api_server.go:103] status: https://192.168.61.203:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:37:38.378346   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:38.383766   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 200:
	ok
	I0725 18:37:38.390454   50912 api_server.go:141] control plane version: v1.30.3
	I0725 18:37:38.390484   50912 api_server.go:131] duration metric: took 4.012362885s to wait for apiserver health ...
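The health wait above is a poll loop against /healthz on the secured port: the early 403s (before the RBAC bootstrap post-start hook finishes) and 500s (while the remaining hooks run) are tolerated until a plain 200 "ok" comes back. A simplified Go sketch of such a loop, assuming an anonymous client with TLS verification disabled; the real check in the log may authenticate with the cluster's client certificates, which this sketch omits:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cert signed by the cluster's own CA;
		// this sketch skips verification instead of loading that CA.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 before RBAC bootstrap and 500 while post-start hooks
			// finish are expected; keep polling.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver /healthz not ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.203:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```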
	I0725 18:37:38.390495   50912 cni.go:84] Creating CNI manager for ""
	I0725 18:37:38.390503   50912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:37:38.392270   50912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:37:38.393562   50912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:37:38.406023   50912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:37:38.424867   50912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:37:38.439009   50912 system_pods.go:59] 6 kube-system pods found
	I0725 18:37:38.439048   50912 system_pods.go:61] "coredns-7db6d8ff4d-jn9l2" [f8c1b738-b4ca-4606-b07d-d2ce0d5149a7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:37:38.439060   50912 system_pods.go:61] "etcd-pause-669817" [30a01595-37b9-4c88-93a0-c8d38a35074f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:37:38.439075   50912 system_pods.go:61] "kube-apiserver-pause-669817" [f7cd9a3e-3c9b-4f08-beef-22cb0162ac30] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:37:38.439089   50912 system_pods.go:61] "kube-controller-manager-pause-669817" [57a3f32e-0861-4993-900a-09bb3dad867d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:37:38.439100   50912 system_pods.go:61] "kube-proxy-m4njw" [300b49b6-c6ee-4298-b856-0579eecc04f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:37:38.439109   50912 system_pods.go:61] "kube-scheduler-pause-669817" [d8654b2f-fa11-4f06-a7ee-ca40b65bdd83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:37:38.439121   50912 system_pods.go:74] duration metric: took 14.231502ms to wait for pod list to return data ...
	I0725 18:37:38.439134   50912 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:37:38.443801   50912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:37:38.443839   50912 node_conditions.go:123] node cpu capacity is 2
	I0725 18:37:38.443855   50912 node_conditions.go:105] duration metric: took 4.714582ms to run NodePressure ...
	I0725 18:37:38.443879   50912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:37:38.745063   50912 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:37:38.750473   50912 kubeadm.go:739] kubelet initialised
	I0725 18:37:38.750492   50912 kubeadm.go:740] duration metric: took 5.402046ms waiting for restarted kubelet to initialise ...
	I0725 18:37:38.750500   50912 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:37:38.762530   50912 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:38.223338   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetIP
	I0725 18:37:38.226325   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:38.226705   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:38.226735   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:38.226963   51158 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0725 18:37:38.231001   51158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:37:38.244019   51158 kubeadm.go:883] updating cluster {Name:force-systemd-env-207395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:force-systemd-env-207395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:37:38.244142   51158 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:37:38.244207   51158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:37:38.284013   51158 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:37:38.284081   51158 ssh_runner.go:195] Run: which lz4
	I0725 18:37:38.287855   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0725 18:37:38.287962   51158 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:37:38.292420   51158 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:37:38.292455   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:37:39.563096   51158 crio.go:462] duration metric: took 1.275163257s to copy over tarball
	I0725 18:37:39.563183   51158 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:37:37.992221   51390 main.go:141] libmachine: (stopped-upgrade-160946) Waiting to get IP...
	I0725 18:37:37.993236   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:37.993688   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:37.993795   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:37.993687   51490 retry.go:31] will retry after 226.658501ms: waiting for machine to come up
	I0725 18:37:38.222415   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:38.223036   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:38.223064   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:38.222983   51490 retry.go:31] will retry after 273.378812ms: waiting for machine to come up
	I0725 18:37:38.498623   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:38.499101   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:38.499140   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:38.499058   51490 retry.go:31] will retry after 468.694129ms: waiting for machine to come up
	I0725 18:37:38.969952   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:38.970539   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:38.970564   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:38.970456   51490 retry.go:31] will retry after 523.855417ms: waiting for machine to come up
	I0725 18:37:39.496987   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:39.497615   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:39.497639   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:39.497572   51490 retry.go:31] will retry after 569.232898ms: waiting for machine to come up
	I0725 18:37:40.068462   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:40.069070   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:40.069129   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:40.069047   51490 retry.go:31] will retry after 646.366469ms: waiting for machine to come up
	I0725 18:37:40.716926   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:40.717443   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:40.717473   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:40.717397   51490 retry.go:31] will retry after 1.049207488s: waiting for machine to come up
	I0725 18:37:41.767965   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:41.768467   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:41.768500   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:41.768431   51490 retry.go:31] will retry after 988.54089ms: waiting for machine to come up
	I0725 18:37:39.769758   50912 pod_ready.go:92] pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:39.769788   50912 pod_ready.go:81] duration metric: took 1.007226195s for pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace to be "Ready" ...
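The pod_ready waits above repeatedly re-fetch each system-critical pod and test its Ready condition, which is why the earlier pod list shows entries like "Running / Ready:ContainersNotReady". A minimal sketch of that condition check using the Kubernetes API types; the client setup and polling around it are left out, and the helper name is made up:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True,
// which is what the "Ready" waits in the log are testing for.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}
	fmt.Println("ready:", isPodReady(pod)) // ready: false
}
```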
	I0725 18:37:39.769800   50912 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:41.777724   50912 pod_ready.go:102] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"False"
	I0725 18:37:41.807856   51158 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.244642165s)
	I0725 18:37:41.807914   51158 crio.go:469] duration metric: took 2.244786211s to extract the tarball
	I0725 18:37:41.807925   51158 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:37:41.844752   51158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:37:41.890282   51158 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:37:41.890336   51158 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:37:41.890347   51158 kubeadm.go:934] updating node { 192.168.72.213 8443 v1.30.3 crio true true} ...
	I0725 18:37:41.890519   51158 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-207395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-207395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:37:41.890604   51158 ssh_runner.go:195] Run: crio config
	I0725 18:37:41.941566   51158 cni.go:84] Creating CNI manager for ""
	I0725 18:37:41.941589   51158 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:37:41.941598   51158 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:37:41.941616   51158 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.213 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-207395 NodeName:force-systemd-env-207395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.213 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:37:41.941750   51158 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-207395"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:37:41.941807   51158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:37:41.951481   51158 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:37:41.951542   51158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:37:41.960748   51158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0725 18:37:41.976481   51158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:37:41.991765   51158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
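The kubelet drop-in copied above is rendered from the node's name, IP, and Kubernetes version before being written to the guest. A small Go sketch of producing an equivalent unit fragment with text/template; the template text mirrors the kubelet [Unit]/[Service] content shown earlier in this log, not minikube's bundled template file:

```go
package main

import (
	"os"
	"text/template"
)

// Reconstructed from the kubelet unit fragment printed in the log above.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.3", "force-systemd-env-207395", "192.168.72.213"}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```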
	I0725 18:37:42.007359   51158 ssh_runner.go:195] Run: grep 192.168.72.213	control-plane.minikube.internal$ /etc/hosts
	I0725 18:37:42.011429   51158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:37:42.022719   51158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:37:42.140432   51158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:37:42.156977   51158 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395 for IP: 192.168.72.213
	I0725 18:37:42.157001   51158 certs.go:194] generating shared ca certs ...
	I0725 18:37:42.157034   51158 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.157269   51158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:37:42.157421   51158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:37:42.157440   51158 certs.go:256] generating profile certs ...
	I0725 18:37:42.157505   51158 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key
	I0725 18:37:42.157519   51158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt with IP's: []
	I0725 18:37:42.302585   51158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt ...
	I0725 18:37:42.302613   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt: {Name:mk16be5c27c4cc6a0c88bb557b296ce31c7b5c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.302809   51158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key ...
	I0725 18:37:42.302826   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key: {Name:mkf4b82c57f53278e53d1e5096d1d42f0ac3abcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.302933   51158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key.d9f0523c
	I0725 18:37:42.302958   51158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt.d9f0523c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.213]
	I0725 18:37:42.452149   51158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt.d9f0523c ...
	I0725 18:37:42.452178   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt.d9f0523c: {Name:mke5e225d144aa993adea72f83b5ee090f705175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.452370   51158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key.d9f0523c ...
	I0725 18:37:42.452390   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key.d9f0523c: {Name:mk98b47279530bde8608b2babcc2d4a7e6997db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.452489   51158 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt.d9f0523c -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt
	I0725 18:37:42.452580   51158 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key.d9f0523c -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key
	I0725 18:37:42.452652   51158 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.key
	I0725 18:37:42.452672   51158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.crt with IP's: []
	I0725 18:37:42.653655   51158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.crt ...
	I0725 18:37:42.653685   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.crt: {Name:mk9dad70ca3c57f542f8832d0919be12950e9cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:42.653869   51158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.key ...
	I0725 18:37:42.653896   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.key: {Name:mk2f958654552c6ee67dc797e5e213774d84faa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
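
The certs.go/crypto.go lines above generate the profile certificates: a client cert for minikube-user, an apiserver cert whose IP SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.213, and an aggregator proxy-client cert, all signed by the shared minikube CA. A minimal Go sketch of issuing a certificate with those IP SANs via crypto/x509 (self-signed here for brevity; loading and signing with the real CA key is omitted):

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(3, 0, 0), // roughly the 26280h expiry used in this profile
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.213"),
        },
    }
    // Self-signed: the template doubles as parent; a real CA cert/key would be passed instead.
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}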
	I0725 18:37:42.654004   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0725 18:37:42.654031   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0725 18:37:42.654050   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0725 18:37:42.654069   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0725 18:37:42.654086   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0725 18:37:42.654104   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0725 18:37:42.654121   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0725 18:37:42.654144   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0725 18:37:42.654210   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:37:42.654255   51158 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:37:42.654268   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:37:42.654309   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:37:42.654339   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:37:42.654364   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:37:42.654423   51158 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:37:42.654466   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> /usr/share/ca-certificates/130592.pem
	I0725 18:37:42.654502   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:37:42.654527   51158 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem -> /usr/share/ca-certificates/13059.pem
	I0725 18:37:42.655146   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:37:42.679476   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:37:42.701810   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:37:42.723515   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:37:42.744893   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0725 18:37:42.768275   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:37:42.791863   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:37:42.813264   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:37:42.835192   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:37:42.857209   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:37:42.881874   51158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:37:42.905690   51158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:37:42.926069   51158 ssh_runner.go:195] Run: openssl version
	I0725 18:37:42.944001   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:37:42.958630   51158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:37:42.964601   51158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:37:42.964680   51158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:37:42.970556   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:37:42.983587   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:37:42.995121   51158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:37:42.999400   51158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:37:42.999464   51158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:37:43.005473   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:37:43.016006   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:37:43.027325   51158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:37:43.031568   51158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:37:43.031617   51158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:37:43.037051   51158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
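
The openssl/ln sequence above installs each CA into the node's trust store: openssl x509 -hash -noout prints the subject hash, and the PEM is then symlinked as <hash>.0 under /etc/ssl/certs (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs). A small Go sketch of the same step, shelling out to openssl for the hash; the path is copied from the log and the program would need root:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

func main() {
    pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    // Subject hash, e.g. "b5213941" for the minikube CA above.
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    if err != nil {
        panic(err)
    }
    hash := strings.TrimSpace(string(out))
    link := filepath.Join("/etc/ssl/certs", hash+".0")
    _ = os.Remove(link) // mimic ln -fs: replace a stale link if present
    if err := os.Symlink(pemPath, link); err != nil {
        panic(err)
    }
    fmt.Println("installed", link)
}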
	I0725 18:37:43.047294   51158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:37:43.051239   51158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 18:37:43.051298   51158 kubeadm.go:392] StartCluster: {Name:force-systemd-env-207395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-207395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:37:43.051387   51158 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:37:43.051481   51158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:37:43.088191   51158 cri.go:89] found id: ""
	I0725 18:37:43.088279   51158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:37:43.097882   51158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:37:43.111589   51158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:37:43.125586   51158 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:37:43.125603   51158 kubeadm.go:157] found existing configuration files:
	
	I0725 18:37:43.125652   51158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:37:43.135488   51158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:37:43.135555   51158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:37:43.145341   51158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:37:43.154987   51158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:37:43.155061   51158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:37:43.164441   51158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:37:43.174530   51158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:37:43.174600   51158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:37:43.184867   51158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:37:43.194080   51158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:37:43.194151   51158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:37:43.203635   51158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:37:43.433075   51158 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:37:42.758771   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:42.759309   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:42.759341   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:42.759250   51490 retry.go:31] will retry after 1.591539118s: waiting for machine to come up
	I0725 18:37:44.352514   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:44.353082   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:44.353110   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:44.353034   51490 retry.go:31] will retry after 1.605092008s: waiting for machine to come up
	I0725 18:37:45.959813   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:45.960239   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:45.960260   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:45.960219   51490 retry.go:31] will retry after 1.977540708s: waiting for machine to come up
	I0725 18:37:44.279259   50912 pod_ready.go:102] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"False"
	I0725 18:37:46.776746   50912 pod_ready.go:102] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"False"
	I0725 18:37:48.776872   50912 pod_ready.go:102] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"False"
	I0725 18:37:46.651787   48054 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:37:46.652100   48054 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:37:47.939560   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:47.940071   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:47.940104   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:47.940015   51490 retry.go:31] will retry after 3.270081065s: waiting for machine to come up
	I0725 18:37:51.214315   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:51.214766   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | unable to find current IP address of domain stopped-upgrade-160946 in network mk-stopped-upgrade-160946
	I0725 18:37:51.214820   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | I0725 18:37:51.214704   51490 retry.go:31] will retry after 3.806476269s: waiting for machine to come up
	I0725 18:37:50.276650   50912 pod_ready.go:92] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:50.276678   50912 pod_ready.go:81] duration metric: took 10.506869395s for pod "etcd-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:50.276692   50912 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:51.283526   50912 pod_ready.go:92] pod "kube-apiserver-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:51.283552   50912 pod_ready.go:81] duration metric: took 1.006847646s for pod "kube-apiserver-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:51.283565   50912 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.291717   50912 pod_ready.go:92] pod "kube-controller-manager-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:53.291738   50912 pod_ready.go:81] duration metric: took 2.008166733s for pod "kube-controller-manager-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.291747   50912 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4njw" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.301877   50912 pod_ready.go:92] pod "kube-proxy-m4njw" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:53.301898   50912 pod_ready.go:81] duration metric: took 10.144877ms for pod "kube-proxy-m4njw" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.301907   50912 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.307052   50912 pod_ready.go:92] pod "kube-scheduler-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:53.307069   50912 pod_ready.go:81] duration metric: took 5.156035ms for pod "kube-scheduler-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.307077   50912 pod_ready.go:38] duration metric: took 14.556568018s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
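
pod_ready.go above polls each system-critical pod until its Ready condition reports True (etcd-pause-669817 took about 10.5s here; the rest were already Ready). A hedged client-go sketch of the same wait loop; the kubeconfig path, namespace and pod name are copied from this run purely for illustration:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19326-5877/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
        func(ctx context.Context) (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-669817", metav1.GetOptions{})
            if err != nil {
                return false, nil // keep polling through transient API errors
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    if err != nil {
        panic(err)
    }
    fmt.Println("pod is Ready")
}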
	I0725 18:37:53.307096   50912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:37:53.323687   50912 ops.go:34] apiserver oom_adj: -16
	I0725 18:37:53.323714   50912 kubeadm.go:597] duration metric: took 41.72887376s to restartPrimaryControlPlane
	I0725 18:37:53.323724   50912 kubeadm.go:394] duration metric: took 41.864609202s to StartCluster
	I0725 18:37:53.323743   50912 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:53.323815   50912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:37:53.326326   50912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:53.326607   50912 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:37:53.326714   50912 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:37:53.326852   50912 config.go:182] Loaded profile config "pause-669817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:37:53.329022   50912 out.go:177] * Verifying Kubernetes components...
	I0725 18:37:53.329024   50912 out.go:177] * Enabled addons: 
	I0725 18:37:54.215952   51158 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 18:37:54.216027   51158 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:37:54.216169   51158 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:37:54.216294   51158 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:37:54.216396   51158 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:37:54.216451   51158 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:37:54.217956   51158 out.go:204]   - Generating certificates and keys ...
	I0725 18:37:54.218022   51158 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:37:54.218092   51158 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:37:54.218181   51158 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 18:37:54.218258   51158 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 18:37:54.218358   51158 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 18:37:54.218425   51158 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 18:37:54.218474   51158 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 18:37:54.218648   51158 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-207395 localhost] and IPs [192.168.72.213 127.0.0.1 ::1]
	I0725 18:37:54.218723   51158 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 18:37:54.218908   51158 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-207395 localhost] and IPs [192.168.72.213 127.0.0.1 ::1]
	I0725 18:37:54.219015   51158 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 18:37:54.219117   51158 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 18:37:54.219176   51158 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 18:37:54.219236   51158 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:37:54.219282   51158 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:37:54.219343   51158 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 18:37:54.219390   51158 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:37:54.219444   51158 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:37:54.219489   51158 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:37:54.219580   51158 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:37:54.219648   51158 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:37:54.221485   51158 out.go:204]   - Booting up control plane ...
	I0725 18:37:54.221590   51158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:37:54.221706   51158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:37:54.221792   51158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:37:54.221945   51158 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:37:54.222061   51158 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:37:54.222112   51158 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:37:54.222263   51158 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 18:37:54.222353   51158 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 18:37:54.222429   51158 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.734134ms
	I0725 18:37:54.222533   51158 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 18:37:54.222617   51158 kubeadm.go:310] [api-check] The API server is healthy after 6.001493838s
	I0725 18:37:54.222728   51158 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 18:37:54.222843   51158 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 18:37:54.222893   51158 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 18:37:54.223092   51158 kubeadm.go:310] [mark-control-plane] Marking the node force-systemd-env-207395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 18:37:54.223155   51158 kubeadm.go:310] [bootstrap-token] Using token: w7ppv0.pn4hoefyzgyx4icy
	I0725 18:37:54.224446   51158 out.go:204]   - Configuring RBAC rules ...
	I0725 18:37:54.224570   51158 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 18:37:54.224682   51158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 18:37:54.224849   51158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 18:37:54.225036   51158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 18:37:54.225189   51158 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 18:37:54.225298   51158 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 18:37:54.225454   51158 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 18:37:54.225518   51158 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 18:37:54.225567   51158 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 18:37:54.225573   51158 kubeadm.go:310] 
	I0725 18:37:54.225623   51158 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 18:37:54.225627   51158 kubeadm.go:310] 
	I0725 18:37:54.225711   51158 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 18:37:54.225719   51158 kubeadm.go:310] 
	I0725 18:37:54.225751   51158 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 18:37:54.225843   51158 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 18:37:54.225918   51158 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 18:37:54.225926   51158 kubeadm.go:310] 
	I0725 18:37:54.225988   51158 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 18:37:54.225998   51158 kubeadm.go:310] 
	I0725 18:37:54.226058   51158 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 18:37:54.226068   51158 kubeadm.go:310] 
	I0725 18:37:54.226142   51158 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 18:37:54.226245   51158 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 18:37:54.226364   51158 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 18:37:54.226377   51158 kubeadm.go:310] 
	I0725 18:37:54.226496   51158 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 18:37:54.226602   51158 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 18:37:54.226614   51158 kubeadm.go:310] 
	I0725 18:37:54.226716   51158 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w7ppv0.pn4hoefyzgyx4icy \
	I0725 18:37:54.226848   51158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 \
	I0725 18:37:54.226880   51158 kubeadm.go:310] 	--control-plane 
	I0725 18:37:54.226888   51158 kubeadm.go:310] 
	I0725 18:37:54.226992   51158 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 18:37:54.227004   51158 kubeadm.go:310] 
	I0725 18:37:54.227104   51158 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w7ppv0.pn4hoefyzgyx4icy \
	I0725 18:37:54.227243   51158 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 
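
The join commands printed above include --discovery-token-ca-cert-hash sha256:d31d3697…, which is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info; a joining node uses it to pin the CA it discovers over the bootstrap token. A small Go sketch to recompute the hash on the control-plane node (the standard kubeadm CA path is assumed):

package main

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(data)
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    // Hash of the DER-encoded SubjectPublicKeyInfo, as printed by kubeadm init.
    sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    fmt.Printf("sha256:%x\n", sum)
}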
	I0725 18:37:54.227259   51158 cni.go:84] Creating CNI manager for ""
	I0725 18:37:54.227268   51158 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:37:54.228705   51158 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:37:53.330242   50912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:37:53.330237   50912 addons.go:510] duration metric: took 3.528591ms for enable addons: enabled=[]
	I0725 18:37:53.493527   50912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:37:53.509187   50912 node_ready.go:35] waiting up to 6m0s for node "pause-669817" to be "Ready" ...
	I0725 18:37:53.512033   50912 node_ready.go:49] node "pause-669817" has status "Ready":"True"
	I0725 18:37:53.512055   50912 node_ready.go:38] duration metric: took 2.833884ms for node "pause-669817" to be "Ready" ...
	I0725 18:37:53.512065   50912 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:37:53.517519   50912 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.522252   50912 pod_ready.go:92] pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:53.522283   50912 pod_ready.go:81] duration metric: took 4.73509ms for pod "coredns-7db6d8ff4d-jn9l2" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.522295   50912 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.874217   50912 pod_ready.go:92] pod "etcd-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:53.874241   50912 pod_ready.go:81] duration metric: took 351.938362ms for pod "etcd-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:53.874256   50912 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:54.230023   51158 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:37:54.242701   51158 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
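
cni.go above selects the bridge CNI for the kvm2 + crio combination and copies a 496-byte /etc/cni/net.d/1-k8s.conflist onto the node. The log does not show the file itself, so the sketch below only illustrates the general shape of a bridge + host-local conflist; every field value is an assumption, not minikube's actual file:

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    // Hypothetical minimal conflist: a bridge plugin with host-local IPAM
    // on the pod subnet used in this run, plus portmap for hostPort support.
    conflist := map[string]any{
        "cniVersion": "0.3.1",
        "name":       "bridge",
        "plugins": []map[string]any{
            {
                "type":      "bridge",
                "bridge":    "bridge",
                "isGateway": true,
                "ipMasq":    true,
                "ipam": map[string]any{
                    "type":   "host-local",
                    "subnet": "10.244.0.0/16",
                },
            },
            {
                "type":         "portmap",
                "capabilities": map[string]bool{"portMappings": true},
            },
        },
    }
    out, err := json.MarshalIndent(conflist, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}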
	I0725 18:37:54.259749   51158 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:37:54.259881   51158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:37:54.259895   51158 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-env-207395 minikube.k8s.io/updated_at=2024_07_25T18_37_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=force-systemd-env-207395 minikube.k8s.io/primary=true
	I0725 18:37:54.284240   51158 ops.go:34] apiserver oom_adj: -16
	I0725 18:37:54.456998   51158 kubeadm.go:1113] duration metric: took 197.181529ms to wait for elevateKubeSystemPrivileges
	I0725 18:37:54.457032   51158 kubeadm.go:394] duration metric: took 11.405739314s to StartCluster
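
elevateKubeSystemPrivileges above boils down to kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default, giving kube-system's default service account cluster-admin so the addon manifests can be applied. A hedged client-go sketch of the equivalent object (reading the kubeconfig from its default home location is an assumption):

package main

import (
    "context"

    rbacv1 "k8s.io/api/rbac/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    crb := &rbacv1.ClusterRoleBinding{
        ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
        RoleRef:    rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io", Kind: "ClusterRole", Name: "cluster-admin"},
        Subjects:   []rbacv1.Subject{{Kind: "ServiceAccount", Name: "default", Namespace: "kube-system"}},
    }
    if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}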
	I0725 18:37:54.457064   51158 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:54.457162   51158 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:37:54.458530   51158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:37:54.458839   51158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 18:37:54.458871   51158 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:37:54.458923   51158 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:37:54.459011   51158 addons.go:69] Setting storage-provisioner=true in profile "force-systemd-env-207395"
	I0725 18:37:54.459042   51158 addons.go:234] Setting addon storage-provisioner=true in "force-systemd-env-207395"
	I0725 18:37:54.459045   51158 addons.go:69] Setting default-storageclass=true in profile "force-systemd-env-207395"
	I0725 18:37:54.459117   51158 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-env-207395"
	I0725 18:37:54.459058   51158 config.go:182] Loaded profile config "force-systemd-env-207395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:37:54.459071   51158 host.go:66] Checking if "force-systemd-env-207395" exists ...
	I0725 18:37:54.459550   51158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:54.459577   51158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:54.459655   51158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:54.459696   51158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:54.460351   51158 out.go:177] * Verifying Kubernetes components...
	I0725 18:37:54.461638   51158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:37:54.475310   51158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
	I0725 18:37:54.475851   51158 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:54.476458   51158 main.go:141] libmachine: Using API Version  1
	I0725 18:37:54.476480   51158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:54.477052   51158 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:54.477266   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetState
	I0725 18:37:54.479113   51158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0725 18:37:54.479564   51158 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:54.479977   51158 kapi.go:59] client config for force-systemd-env-207395: &rest.Config{Host:"https://192.168.72.213:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 18:37:54.480099   51158 main.go:141] libmachine: Using API Version  1
	I0725 18:37:54.480124   51158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:54.480417   51158 cert_rotation.go:137] Starting client certificate rotation controller
	I0725 18:37:54.480481   51158 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:54.480678   51158 addons.go:234] Setting addon default-storageclass=true in "force-systemd-env-207395"
	I0725 18:37:54.480720   51158 host.go:66] Checking if "force-systemd-env-207395" exists ...
	I0725 18:37:54.481042   51158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:54.481091   51158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:54.481178   51158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:54.481204   51158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:54.495515   51158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36313
	I0725 18:37:54.495848   51158 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:54.496278   51158 main.go:141] libmachine: Using API Version  1
	I0725 18:37:54.496299   51158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:54.496606   51158 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:54.496765   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetState
	I0725 18:37:54.498455   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:54.499897   51158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I0725 18:37:54.500246   51158 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:54.500749   51158 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:37:54.500805   51158 main.go:141] libmachine: Using API Version  1
	I0725 18:37:54.500845   51158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:54.501198   51158 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:54.501676   51158 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:37:54.501712   51158 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:37:54.501965   51158 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:37:54.501997   51158 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:37:54.502016   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:54.505369   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:54.505839   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:54.505868   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:54.506130   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:54.506294   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:54.506475   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:54.506610   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:54.517594   51158 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42097
	I0725 18:37:54.517959   51158 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:37:54.518387   51158 main.go:141] libmachine: Using API Version  1
	I0725 18:37:54.518407   51158 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:37:54.518705   51158 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:37:54.518909   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetState
	I0725 18:37:54.520642   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .DriverName
	I0725 18:37:54.520864   51158 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:37:54.520880   51158 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:37:54.520899   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHHostname
	I0725 18:37:54.524034   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:54.524549   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:0f:d7", ip: ""} in network mk-force-systemd-env-207395: {Iface:virbr4 ExpiryTime:2024-07-25 19:37:29 +0000 UTC Type:0 Mac:52:54:00:5d:0f:d7 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:force-systemd-env-207395 Clientid:01:52:54:00:5d:0f:d7}
	I0725 18:37:54.524580   51158 main.go:141] libmachine: (force-systemd-env-207395) DBG | domain force-systemd-env-207395 has defined IP address 192.168.72.213 and MAC address 52:54:00:5d:0f:d7 in network mk-force-systemd-env-207395
	I0725 18:37:54.524732   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHPort
	I0725 18:37:54.524937   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHKeyPath
	I0725 18:37:54.525084   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .GetSSHUsername
	I0725 18:37:54.525240   51158 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/force-systemd-env-207395/id_rsa Username:docker}
	I0725 18:37:54.621164   51158 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 18:37:54.674389   51158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:37:54.790449   51158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:37:54.899405   51158 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:37:55.002948   51158 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
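
start.go above confirms that a hosts entry for host.minikube.internal (192.168.72.1) was injected into CoreDNS by piping the coredns ConfigMap through sed on the node and replacing it. A hypothetical client-go equivalent that edits the Corefile in place; the "forward ." anchor and the indentation of the inserted block are assumptions about the Corefile layout:

package main

import (
    "context"
    "fmt"
    "strings"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()
    cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    hosts := "    hosts {\n       192.168.72.1 host.minikube.internal\n       fallthrough\n    }\n"
    // Insert the hosts block just before the existing "forward ." directive.
    cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "    forward .", hosts+"    forward .", 1)
    if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("host record injected into CoreDNS's ConfigMap")
}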
	I0725 18:37:55.003081   51158 main.go:141] libmachine: Making call to close driver server
	I0725 18:37:55.003100   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .Close
	I0725 18:37:55.003374   51158 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:37:55.003395   51158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:37:55.003405   51158 main.go:141] libmachine: Making call to close driver server
	I0725 18:37:55.003414   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .Close
	I0725 18:37:55.003654   51158 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:37:55.003668   51158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:37:55.003912   51158 kapi.go:59] client config for force-systemd-env-207395: &rest.Config{Host:"https://192.168.72.213:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 18:37:55.003933   51158 kapi.go:59] client config for force-systemd-env-207395: &rest.Config{Host:"https://192.168.72.213:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/profiles/force-systemd-env-207395/client.key", CAFile:"/home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 18:37:55.004263   51158 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:37:55.004360   51158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:37:55.022309   51158 main.go:141] libmachine: Making call to close driver server
	I0725 18:37:55.022334   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .Close
	I0725 18:37:55.022762   51158 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:37:55.022780   51158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:37:55.274638   51158 api_server.go:72] duration metric: took 815.723494ms to wait for apiserver process to appear ...
	I0725 18:37:55.274665   51158 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:37:55.274692   51158 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0725 18:37:55.274787   51158 main.go:141] libmachine: Making call to close driver server
	I0725 18:37:55.274809   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .Close
	I0725 18:37:55.275078   51158 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:37:55.275097   51158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:37:55.275108   51158 main.go:141] libmachine: Making call to close driver server
	I0725 18:37:55.275117   51158 main.go:141] libmachine: (force-systemd-env-207395) Calling .Close
	I0725 18:37:55.275372   51158 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:37:55.275386   51158 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:37:55.276940   51158 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0725 18:37:55.278241   51158 addons.go:510] duration metric: took 819.314788ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0725 18:37:55.280618   51158 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0725 18:37:55.281703   51158 api_server.go:141] control plane version: v1.30.3
	I0725 18:37:55.281724   51158 api_server.go:131] duration metric: took 7.052659ms to wait for apiserver health ...
	I0725 18:37:55.281732   51158 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:37:55.289303   51158 system_pods.go:59] 5 kube-system pods found
	I0725 18:37:55.289342   51158 system_pods.go:61] "etcd-force-systemd-env-207395" [d4b528b3-2e0e-49ee-80ae-5c3105998951] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:37:55.289356   51158 system_pods.go:61] "kube-apiserver-force-systemd-env-207395" [26d68a04-6a4a-4d12-b4f4-695ec1b18105] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:37:55.289365   51158 system_pods.go:61] "kube-controller-manager-force-systemd-env-207395" [f34e647e-16a2-4a54-8545-c8a097cb04e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:37:55.289377   51158 system_pods.go:61] "kube-scheduler-force-systemd-env-207395" [26db469f-4c3d-4e1d-99d4-c3bef5370fa1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:37:55.289381   51158 system_pods.go:61] "storage-provisioner" [0afd08cc-3170-432c-b215-b394a5459a44] Pending
	I0725 18:37:55.289391   51158 system_pods.go:74] duration metric: took 7.65278ms to wait for pod list to return data ...
	I0725 18:37:55.289402   51158 kubeadm.go:582] duration metric: took 830.494999ms to wait for: map[apiserver:true system_pods:true]
	I0725 18:37:55.289417   51158 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:37:55.295247   51158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:37:55.295273   51158 node_conditions.go:123] node cpu capacity is 2
	I0725 18:37:55.295283   51158 node_conditions.go:105] duration metric: took 5.861744ms to run NodePressure ...
	I0725 18:37:55.295293   51158 start.go:241] waiting for startup goroutines ...
	I0725 18:37:55.507515   51158 kapi.go:214] "coredns" deployment in "kube-system" namespace and "force-systemd-env-207395" context rescaled to 1 replicas
	I0725 18:37:55.507551   51158 start.go:246] waiting for cluster config update ...
	I0725 18:37:55.507561   51158 start.go:255] writing updated cluster config ...
	I0725 18:37:55.507851   51158 ssh_runner.go:195] Run: rm -f paused
	I0725 18:37:55.558671   51158 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:37:55.560874   51158 out.go:177] * Done! kubectl is now configured to use "force-systemd-env-207395" cluster and "default" namespace by default
	I0725 18:37:54.273490   50912 pod_ready.go:92] pod "kube-apiserver-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:54.273512   50912 pod_ready.go:81] duration metric: took 399.248197ms for pod "kube-apiserver-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:54.273525   50912 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:54.673467   50912 pod_ready.go:92] pod "kube-controller-manager-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:54.673503   50912 pod_ready.go:81] duration metric: took 399.964582ms for pod "kube-controller-manager-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:54.673517   50912 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m4njw" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:55.074566   50912 pod_ready.go:92] pod "kube-proxy-m4njw" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:55.074592   50912 pod_ready.go:81] duration metric: took 401.06764ms for pod "kube-proxy-m4njw" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:55.074605   50912 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:55.477021   50912 pod_ready.go:92] pod "kube-scheduler-pause-669817" in "kube-system" namespace has status "Ready":"True"
	I0725 18:37:55.477046   50912 pod_ready.go:81] duration metric: took 402.433989ms for pod "kube-scheduler-pause-669817" in "kube-system" namespace to be "Ready" ...
	I0725 18:37:55.477054   50912 pod_ready.go:38] duration metric: took 1.964977117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:37:55.477068   50912 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:37:55.477118   50912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:37:55.491228   50912 api_server.go:72] duration metric: took 2.164581843s to wait for apiserver process to appear ...
	I0725 18:37:55.491260   50912 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:37:55.491282   50912 api_server.go:253] Checking apiserver healthz at https://192.168.61.203:8443/healthz ...
	I0725 18:37:55.499030   50912 api_server.go:279] https://192.168.61.203:8443/healthz returned 200:
	ok
	I0725 18:37:55.500226   50912 api_server.go:141] control plane version: v1.30.3
	I0725 18:37:55.500247   50912 api_server.go:131] duration metric: took 8.980564ms to wait for apiserver health ...
	I0725 18:37:55.500256   50912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:37:55.676584   50912 system_pods.go:59] 6 kube-system pods found
	I0725 18:37:55.676615   50912 system_pods.go:61] "coredns-7db6d8ff4d-jn9l2" [f8c1b738-b4ca-4606-b07d-d2ce0d5149a7] Running
	I0725 18:37:55.676622   50912 system_pods.go:61] "etcd-pause-669817" [30a01595-37b9-4c88-93a0-c8d38a35074f] Running
	I0725 18:37:55.676627   50912 system_pods.go:61] "kube-apiserver-pause-669817" [f7cd9a3e-3c9b-4f08-beef-22cb0162ac30] Running
	I0725 18:37:55.676633   50912 system_pods.go:61] "kube-controller-manager-pause-669817" [57a3f32e-0861-4993-900a-09bb3dad867d] Running
	I0725 18:37:55.676640   50912 system_pods.go:61] "kube-proxy-m4njw" [300b49b6-c6ee-4298-b856-0579eecc04f4] Running
	I0725 18:37:55.676649   50912 system_pods.go:61] "kube-scheduler-pause-669817" [d8654b2f-fa11-4f06-a7ee-ca40b65bdd83] Running
	I0725 18:37:55.676658   50912 system_pods.go:74] duration metric: took 176.39439ms to wait for pod list to return data ...
	I0725 18:37:55.676670   50912 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:37:55.875233   50912 default_sa.go:45] found service account: "default"
	I0725 18:37:55.875257   50912 default_sa.go:55] duration metric: took 198.575701ms for default service account to be created ...
	I0725 18:37:55.875269   50912 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:37:56.089282   50912 system_pods.go:86] 6 kube-system pods found
	I0725 18:37:56.089310   50912 system_pods.go:89] "coredns-7db6d8ff4d-jn9l2" [f8c1b738-b4ca-4606-b07d-d2ce0d5149a7] Running
	I0725 18:37:56.089318   50912 system_pods.go:89] "etcd-pause-669817" [30a01595-37b9-4c88-93a0-c8d38a35074f] Running
	I0725 18:37:56.089323   50912 system_pods.go:89] "kube-apiserver-pause-669817" [f7cd9a3e-3c9b-4f08-beef-22cb0162ac30] Running
	I0725 18:37:56.089330   50912 system_pods.go:89] "kube-controller-manager-pause-669817" [57a3f32e-0861-4993-900a-09bb3dad867d] Running
	I0725 18:37:56.089336   50912 system_pods.go:89] "kube-proxy-m4njw" [300b49b6-c6ee-4298-b856-0579eecc04f4] Running
	I0725 18:37:56.089341   50912 system_pods.go:89] "kube-scheduler-pause-669817" [d8654b2f-fa11-4f06-a7ee-ca40b65bdd83] Running
	I0725 18:37:56.089349   50912 system_pods.go:126] duration metric: took 214.073398ms to wait for k8s-apps to be running ...
	I0725 18:37:56.089359   50912 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:37:56.089408   50912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:37:56.103426   50912 system_svc.go:56] duration metric: took 14.05623ms WaitForService to wait for kubelet
	I0725 18:37:56.103459   50912 kubeadm.go:582] duration metric: took 2.776818378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:37:56.103478   50912 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:37:56.274232   50912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:37:56.274260   50912 node_conditions.go:123] node cpu capacity is 2
	I0725 18:37:56.274274   50912 node_conditions.go:105] duration metric: took 170.790378ms to run NodePressure ...
	I0725 18:37:56.274288   50912 start.go:241] waiting for startup goroutines ...
	I0725 18:37:56.274298   50912 start.go:246] waiting for cluster config update ...
	I0725 18:37:56.274308   50912 start.go:255] writing updated cluster config ...
	I0725 18:37:56.274668   50912 ssh_runner.go:195] Run: rm -f paused
	I0725 18:37:56.321524   50912 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:37:56.324945   50912 out.go:177] * Done! kubectl is now configured to use "pause-669817" cluster and "default" namespace by default
	I0725 18:37:55.022791   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.023384   51390 main.go:141] libmachine: (stopped-upgrade-160946) Found IP for machine: 192.168.39.235
	I0725 18:37:55.023411   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has current primary IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.023419   51390 main.go:141] libmachine: (stopped-upgrade-160946) Reserving static IP address...
	I0725 18:37:55.023949   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "stopped-upgrade-160946", mac: "52:54:00:67:70:08", ip: "192.168.39.235"} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:55.023983   51390 main.go:141] libmachine: (stopped-upgrade-160946) Reserved static IP address: 192.168.39.235
	I0725 18:37:55.024005   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | skip adding static IP to network mk-stopped-upgrade-160946 - found existing host DHCP lease matching {name: "stopped-upgrade-160946", mac: "52:54:00:67:70:08", ip: "192.168.39.235"}
	I0725 18:37:55.024027   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | Getting to WaitForSSH function...
	I0725 18:37:55.024043   51390 main.go:141] libmachine: (stopped-upgrade-160946) Waiting for SSH to be available...
	I0725 18:37:55.026179   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.026532   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:55.026563   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.026678   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | Using SSH client type: external
	I0725 18:37:55.026733   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/stopped-upgrade-160946/id_rsa (-rw-------)
	I0725 18:37:55.026775   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/stopped-upgrade-160946/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:37:55.026793   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | About to run SSH command:
	I0725 18:37:55.026832   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | exit 0
	I0725 18:37:55.116906   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | SSH cmd err, output: <nil>: 
	I0725 18:37:55.117284   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetConfigRaw
	I0725 18:37:55.117857   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetIP
	I0725 18:37:55.120699   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.121064   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:55.121090   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.121366   51390 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/stopped-upgrade-160946/config.json ...
	I0725 18:37:55.121596   51390 machine.go:94] provisionDockerMachine start ...
	I0725 18:37:55.121625   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:55.121835   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHHostname
	I0725 18:37:55.124364   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.124737   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:55.124764   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.124909   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHPort
	I0725 18:37:55.125066   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:55.125242   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:55.125362   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHUsername
	I0725 18:37:55.125511   51390 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:55.125763   51390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0725 18:37:55.125775   51390 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:37:55.239983   51390 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:37:55.240016   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetMachineName
	I0725 18:37:55.240286   51390 buildroot.go:166] provisioning hostname "stopped-upgrade-160946"
	I0725 18:37:55.240333   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetMachineName
	I0725 18:37:55.240535   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHHostname
	I0725 18:37:55.243661   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.244144   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:55.244175   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.244268   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHPort
	I0725 18:37:55.244451   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:55.244633   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:55.244748   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHUsername
	I0725 18:37:55.244903   51390 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:55.245064   51390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0725 18:37:55.245078   51390 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-160946 && echo "stopped-upgrade-160946" | sudo tee /etc/hostname
	I0725 18:37:55.375943   51390 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-160946
	
	I0725 18:37:55.375972   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHHostname
	I0725 18:37:55.378483   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.378854   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:55.378899   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.379136   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHPort
	I0725 18:37:55.379330   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:55.379487   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:55.379614   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHUsername
	I0725 18:37:55.379753   51390 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:55.379916   51390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0725 18:37:55.379932   51390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-160946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-160946/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-160946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:37:55.507889   51390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:37:55.507914   51390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:37:55.507941   51390 buildroot.go:174] setting up certificates
	I0725 18:37:55.507952   51390 provision.go:84] configureAuth start
	I0725 18:37:55.507967   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetMachineName
	I0725 18:37:55.508230   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetIP
	I0725 18:37:55.511352   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.511842   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:55.511899   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.512097   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHHostname
	I0725 18:37:55.514811   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.515261   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:55.515292   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.515484   51390 provision.go:143] copyHostCerts
	I0725 18:37:55.515550   51390 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:37:55.515561   51390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:37:55.515632   51390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:37:55.515768   51390 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:37:55.515780   51390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:37:55.515830   51390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:37:55.515921   51390 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:37:55.515932   51390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:37:55.515959   51390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:37:55.516052   51390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-160946 san=[127.0.0.1 192.168.39.235 localhost minikube stopped-upgrade-160946]
	I0725 18:37:55.690438   51390 provision.go:177] copyRemoteCerts
	I0725 18:37:55.690494   51390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:37:55.690518   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHHostname
	I0725 18:37:55.693369   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.693789   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:55.693830   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.693974   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHPort
	I0725 18:37:55.694154   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:55.694322   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHUsername
	I0725 18:37:55.694477   51390 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/stopped-upgrade-160946/id_rsa Username:docker}
	I0725 18:37:55.785553   51390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:37:55.805366   51390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:37:55.823983   51390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0725 18:37:55.842535   51390 provision.go:87] duration metric: took 334.569818ms to configureAuth
	I0725 18:37:55.842560   51390 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:37:55.842739   51390 config.go:182] Loaded profile config "stopped-upgrade-160946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0725 18:37:55.842833   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHHostname
	I0725 18:37:55.846988   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.847642   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:55.847677   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:55.847856   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHPort
	I0725 18:37:55.848040   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:55.848201   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:55.848409   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHUsername
	I0725 18:37:55.848589   51390 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:55.848772   51390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0725 18:37:55.848791   51390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:37:56.134681   51390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:37:56.134728   51390 machine.go:97] duration metric: took 1.013091897s to provisionDockerMachine
	I0725 18:37:56.134741   51390 start.go:293] postStartSetup for "stopped-upgrade-160946" (driver="kvm2")
	I0725 18:37:56.134757   51390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:37:56.134791   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:56.135135   51390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:37:56.135182   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHHostname
	I0725 18:37:56.450072   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:56.450510   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:56.450618   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:56.450736   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHPort
	I0725 18:37:56.450937   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:56.451094   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHUsername
	I0725 18:37:56.451309   51390 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/stopped-upgrade-160946/id_rsa Username:docker}
	I0725 18:37:56.537896   51390 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:37:56.541597   51390 info.go:137] Remote host: Buildroot 2021.02.12
	I0725 18:37:56.541620   51390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:37:56.541690   51390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:37:56.541798   51390 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:37:56.541906   51390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:37:56.549716   51390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:37:56.569573   51390 start.go:296] duration metric: took 434.815487ms for postStartSetup
	I0725 18:37:56.569616   51390 fix.go:56] duration metric: took 19.956586887s for fixHost
	I0725 18:37:56.569638   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHHostname
	I0725 18:37:56.572724   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:56.573120   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:56.573147   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:56.573277   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHPort
	I0725 18:37:56.573517   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:56.573707   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:56.573862   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHUsername
	I0725 18:37:56.574008   51390 main.go:141] libmachine: Using SSH client type: native
	I0725 18:37:56.574215   51390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0725 18:37:56.574235   51390 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:37:56.696744   51390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721932676.672335678
	
	I0725 18:37:56.696776   51390 fix.go:216] guest clock: 1721932676.672335678
	I0725 18:37:56.696785   51390 fix.go:229] Guest: 2024-07-25 18:37:56.672335678 +0000 UTC Remote: 2024-07-25 18:37:56.569620063 +0000 UTC m=+29.732569935 (delta=102.715615ms)
	I0725 18:37:56.696809   51390 fix.go:200] guest clock delta is within tolerance: 102.715615ms
	I0725 18:37:56.696819   51390 start.go:83] releasing machines lock for "stopped-upgrade-160946", held for 20.083829148s
	I0725 18:37:56.696843   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:56.697114   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetIP
	I0725 18:37:56.700229   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:56.700533   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:56.700555   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:56.700826   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:56.701420   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:56.701615   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .DriverName
	I0725 18:37:56.701717   51390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:37:56.701764   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHHostname
	I0725 18:37:56.701785   51390 ssh_runner.go:195] Run: cat /version.json
	I0725 18:37:56.701807   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHHostname
	I0725 18:37:56.704748   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:56.705009   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:56.705179   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:56.705212   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:56.705439   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHPort
	I0725 18:37:56.705459   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:70:08", ip: ""} in network mk-stopped-upgrade-160946: {Iface:virbr3 ExpiryTime:2024-07-25 19:37:46 +0000 UTC Type:0 Mac:52:54:00:67:70:08 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:stopped-upgrade-160946 Clientid:01:52:54:00:67:70:08}
	I0725 18:37:56.705481   51390 main.go:141] libmachine: (stopped-upgrade-160946) DBG | domain stopped-upgrade-160946 has defined IP address 192.168.39.235 and MAC address 52:54:00:67:70:08 in network mk-stopped-upgrade-160946
	I0725 18:37:56.705800   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHPort
	I0725 18:37:56.705808   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:56.705995   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHUsername
	I0725 18:37:56.706008   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHKeyPath
	I0725 18:37:56.706199   51390 main.go:141] libmachine: (stopped-upgrade-160946) Calling .GetSSHUsername
	I0725 18:37:56.706192   51390 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/stopped-upgrade-160946/id_rsa Username:docker}
	I0725 18:37:56.706306   51390 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/stopped-upgrade-160946/id_rsa Username:docker}
	W0725 18:37:56.834560   51390 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0725 18:37:56.834638   51390 ssh_runner.go:195] Run: systemctl --version
	I0725 18:37:56.839781   51390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:37:56.981254   51390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:37:56.987510   51390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:37:56.987574   51390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:37:57.000824   51390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:37:57.000850   51390 start.go:495] detecting cgroup driver to use...
	I0725 18:37:57.000930   51390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:37:57.015536   51390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:37:57.027710   51390 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:37:57.027757   51390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:37:57.041253   51390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:37:57.052713   51390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:37:57.174451   51390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:37:57.320771   51390 docker.go:233] disabling docker service ...
	I0725 18:37:57.320833   51390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:37:57.333961   51390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:37:57.347890   51390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:37:57.481263   51390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:37:57.614682   51390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:37:57.628211   51390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:37:57.647862   51390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0725 18:37:57.647924   51390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:57.660228   51390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:37:57.660286   51390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:57.669524   51390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:57.680301   51390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:57.689054   51390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:37:57.697192   51390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:57.705255   51390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:57.721636   51390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:37:57.732502   51390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:37:57.740696   51390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:37:57.740772   51390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:37:57.753547   51390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:37:57.763912   51390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:37:57.898045   51390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:37:58.058039   51390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:37:58.058093   51390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:37:58.065263   51390 start.go:563] Will wait 60s for crictl version
	I0725 18:37:58.065318   51390 ssh_runner.go:195] Run: which crictl
	I0725 18:37:58.069449   51390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:37:58.109876   51390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0725 18:37:58.109969   51390 ssh_runner.go:195] Run: crio --version
	I0725 18:37:58.153995   51390 ssh_runner.go:195] Run: crio --version
	I0725 18:37:58.201724   51390 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	
	
	==> CRI-O <==
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.512306806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c7d55e7-5c8a-48a2-9142-1d309b1528c6 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.514395700Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72e47630-fc69-4685-9214-933f372d6347 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.514991628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932679514955132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72e47630-fc69-4685-9214-933f372d6347 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.515782400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12c5eb4d-fb7f-4aaf-b21b-ccbe5598b1f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.515859619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12c5eb4d-fb7f-4aaf-b21b-ccbe5598b1f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.516508370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62d2ab96b8ce35108bf38ef19831b6202d91db2a2b6f6842a9d571aab5c4f81e,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932657661957940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a66bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46918ce287ec6e5bb99ad04f129e2283864e62e60c410fb8291e431f8ff38411,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721932657652802606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d656436d68b25ea03d2fa894a0e4f5492f7f4eea72de6430931cbd7ffdc78919,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721932653872366467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb3
1a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f7e8072412aef990e67c6187e763bcb0e17aa0e93d85fa4a8817901bfff3,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721932653852363356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d035de80b8ca5f90bb2ba4e9ca1e8d72c145b65b499384821b87ecd91270f,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721932653863886351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295ec64bbc1f5e73fbd
b11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe591f55bbac211b40bf00e3b01b0680bd8746c3d7f35bcf10391e1354fe7210,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721932653833600032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932631089700861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a6
6bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721932630671857480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721932630522880708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb31a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721932630560496537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721932630508145342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 295ec64bbc1f5e73fbdb11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721932630470342120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12c5eb4d-fb7f-4aaf-b21b-ccbe5598b1f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.567364381Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=06a2f1a2-0b64-4724-8cb7-8fa79f0ecbce name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.567633282Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&PodSandboxMetadata{Name:kube-proxy-m4njw,Uid:300b49b6-c6ee-4298-b856-0579eecc04f4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721932630322326740,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300b49b6-c6ee-4298-b856-0579eecc04f4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-25T18:36:12.137629514Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-669817,Uid:295ec64bbc1f5e73fbdb11a7575bfe24,Na
mespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721932630275333611,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295ec64bbc1f5e73fbdb11a7575bfe24,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.203:8443,kubernetes.io/config.hash: 295ec64bbc1f5e73fbdb11a7575bfe24,kubernetes.io/config.seen: 2024-07-25T18:35:57.526841098Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-669817,Uid:24012bceb31a093e1cb89d0855f7612b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721932630237700218,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-669817
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb31a093e1cb89d0855f7612b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 24012bceb31a093e1cb89d0855f7612b,kubernetes.io/config.seen: 2024-07-25T18:35:57.526843767Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jn9l2,Uid:f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721932630228529308,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-25T18:36:12.268510843Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ca7513c4a1094a0328578c
3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&PodSandboxMetadata{Name:etcd-pause-669817,Uid:1852460407fc0267ac60e859363806f7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721932630195889510,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.203:2379,kubernetes.io/config.hash: 1852460407fc0267ac60e859363806f7,kubernetes.io/config.seen: 2024-07-25T18:35:57.526808096Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-669817,Uid:f489f4d5846c1eb526b11c16fac51984,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721932629916056347,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f489f4d5846c1eb526b11c16fac51984,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f489f4d5846c1eb526b11c16fac51984,kubernetes.io/config.seen: 2024-07-25T18:35:57.526843015Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=06a2f1a2-0b64-4724-8cb7-8fa79f0ecbce name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.568389983Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b486607-633f-4c5d-836f-88b2be72caf6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.568473455Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b486607-633f-4c5d-836f-88b2be72caf6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.568861027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62d2ab96b8ce35108bf38ef19831b6202d91db2a2b6f6842a9d571aab5c4f81e,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932657661957940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a66bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46918ce287ec6e5bb99ad04f129e2283864e62e60c410fb8291e431f8ff38411,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721932657652802606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d656436d68b25ea03d2fa894a0e4f5492f7f4eea72de6430931cbd7ffdc78919,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721932653872366467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb3
1a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f7e8072412aef990e67c6187e763bcb0e17aa0e93d85fa4a8817901bfff3,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721932653852363356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d035de80b8ca5f90bb2ba4e9ca1e8d72c145b65b499384821b87ecd91270f,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721932653863886351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295ec64bbc1f5e73fbd
b11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe591f55bbac211b40bf00e3b01b0680bd8746c3d7f35bcf10391e1354fe7210,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721932653833600032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932631089700861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a6
6bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721932630671857480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721932630522880708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb31a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721932630560496537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721932630508145342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 295ec64bbc1f5e73fbdb11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721932630470342120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b486607-633f-4c5d-836f-88b2be72caf6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.583077018Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de292429-57a6-4d3f-b164-388e0ba0cc48 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.583189505Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de292429-57a6-4d3f-b164-388e0ba0cc48 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.584722554Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c202425-9253-4790-b0c0-fdd0a27d3f86 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.585422530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932679585388254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c202425-9253-4790-b0c0-fdd0a27d3f86 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.586436842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51e6c276-d378-4020-8567-bbc485cf13a2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.586523645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51e6c276-d378-4020-8567-bbc485cf13a2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.586869160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62d2ab96b8ce35108bf38ef19831b6202d91db2a2b6f6842a9d571aab5c4f81e,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932657661957940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a66bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46918ce287ec6e5bb99ad04f129e2283864e62e60c410fb8291e431f8ff38411,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721932657652802606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d656436d68b25ea03d2fa894a0e4f5492f7f4eea72de6430931cbd7ffdc78919,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721932653872366467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb3
1a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f7e8072412aef990e67c6187e763bcb0e17aa0e93d85fa4a8817901bfff3,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721932653852363356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d035de80b8ca5f90bb2ba4e9ca1e8d72c145b65b499384821b87ecd91270f,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721932653863886351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295ec64bbc1f5e73fbd
b11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe591f55bbac211b40bf00e3b01b0680bd8746c3d7f35bcf10391e1354fe7210,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721932653833600032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932631089700861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a6
6bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721932630671857480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721932630522880708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb31a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721932630560496537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721932630508145342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 295ec64bbc1f5e73fbdb11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721932630470342120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51e6c276-d378-4020-8567-bbc485cf13a2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.644803053Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d81eb6a8-0271-4528-983f-e8c60b3b9c90 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.644934798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d81eb6a8-0271-4528-983f-e8c60b3b9c90 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.646749225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfc74ae8-24d1-4f10-b767-c02066148894 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.647572143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721932679647531545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfc74ae8-24d1-4f10-b767-c02066148894 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.648516312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb0002bc-837e-42b5-8dc1-a46587021ba4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.648613727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb0002bc-837e-42b5-8dc1-a46587021ba4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:37:59 pause-669817 crio[2487]: time="2024-07-25 18:37:59.648977331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62d2ab96b8ce35108bf38ef19831b6202d91db2a2b6f6842a9d571aab5c4f81e,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721932657661957940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a66bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46918ce287ec6e5bb99ad04f129e2283864e62e60c410fb8291e431f8ff38411,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721932657652802606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d656436d68b25ea03d2fa894a0e4f5492f7f4eea72de6430931cbd7ffdc78919,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721932653872366467,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb3
1a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ad8f7e8072412aef990e67c6187e763bcb0e17aa0e93d85fa4a8817901bfff3,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721932653852363356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d035de80b8ca5f90bb2ba4e9ca1e8d72c145b65b499384821b87ecd91270f,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721932653863886351,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 295ec64bbc1f5e73fbd
b11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe591f55bbac211b40bf00e3b01b0680bd8746c3d7f35bcf10391e1354fe7210,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721932653833600032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io
.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442,PodSandboxId:42743769d893702b2f33e48c8015a7d3cae5fd0e6183328062148c38ed9f07d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721932631089700861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn9l2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8c1b738-b4ca-4606-b07d-d2ce0d5149a7,},Annotations:map[string]string{io.kubernetes.container.hash: c6a6
6bab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03,PodSandboxId:be0c7bb84692cf4a8dc9a8eb85819351267acb58819f501945c218bd04f946f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721932630671857480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-m4njw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 300b49b6-c6ee-4298-b856-0579eecc04f4,},Annotations:map[string]string{io.kubernetes.container.hash: 4ba447d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed,PodSandboxId:7353b0f7fbe56e4431b03d09b899f744c10a49ad4b51b857a5a5509f06c635cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721932630522880708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24012bceb31a093e1cb89d0855f7612b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258,PodSandboxId:2ca7513c4a1094a0328578c3348f6d49ae70270f8ac4b649ee912ffbe98d8eec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721932630560496537,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-669817,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 1852460407fc0267ac60e859363806f7,},Annotations:map[string]string{io.kubernetes.container.hash: d3f0c38e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7,PodSandboxId:39effd372b6aea1edc7a401910858bcb5801e35d2bfdd0015bb3b8b08c127002,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721932630508145342,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 295ec64bbc1f5e73fbdb11a7575bfe24,},Annotations:map[string]string{io.kubernetes.container.hash: d4ccb54f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8,PodSandboxId:7a70bdf419db305749a2eff890a07867ab582c53624260caffc63773106d3781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721932630470342120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-669817,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: f489f4d5846c1eb526b11c16fac51984,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb0002bc-837e-42b5-8dc1-a46587021ba4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	62d2ab96b8ce3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago      Running             coredns                   2                   42743769d8937       coredns-7db6d8ff4d-jn9l2
	46918ce287ec6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   22 seconds ago      Running             kube-proxy                2                   be0c7bb84692c       kube-proxy-m4njw
	d656436d68b25       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   25 seconds ago      Running             kube-scheduler            2                   7353b0f7fbe56       kube-scheduler-pause-669817
	297d035de80b8       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   25 seconds ago      Running             kube-apiserver            2                   39effd372b6ae       kube-apiserver-pause-669817
	7ad8f7e807241       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   25 seconds ago      Running             kube-controller-manager   2                   7a70bdf419db3       kube-controller-manager-pause-669817
	fe591f55bbac2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago      Running             etcd                      2                   2ca7513c4a109       etcd-pause-669817
	0b2ab4d0d15a3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   48 seconds ago      Exited              coredns                   1                   42743769d8937       coredns-7db6d8ff4d-jn9l2
	ec10b979248ce       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   49 seconds ago      Exited              kube-proxy                1                   be0c7bb84692c       kube-proxy-m4njw
	c1e504cf40eba       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   49 seconds ago      Exited              etcd                      1                   2ca7513c4a109       etcd-pause-669817
	3e6bd9a4d3c0f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   49 seconds ago      Exited              kube-scheduler            1                   7353b0f7fbe56       kube-scheduler-pause-669817
	5c7008d55b151       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   49 seconds ago      Exited              kube-apiserver            1                   39effd372b6ae       kube-apiserver-pause-669817
	910591d676800       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   49 seconds ago      Exited              kube-controller-manager   1                   7a70bdf419db3       kube-controller-manager-pause-669817
	
	
	==> coredns [0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:35335 - 29942 "HINFO IN 4362430812324189062.9196917897299404625. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016876368s
	
	
	==> coredns [62d2ab96b8ce35108bf38ef19831b6202d91db2a2b6f6842a9d571aab5c4f81e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60185 - 4305 "HINFO IN 8691005056273813146.3009638441941347370. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006472656s
	
	
	==> describe nodes <==
	Name:               pause-669817
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-669817
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=pause-669817
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_35_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:35:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-669817
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:37:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 18:37:37 +0000   Thu, 25 Jul 2024 18:35:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 18:37:37 +0000   Thu, 25 Jul 2024 18:35:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 18:37:37 +0000   Thu, 25 Jul 2024 18:35:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 18:37:37 +0000   Thu, 25 Jul 2024 18:35:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.203
	  Hostname:    pause-669817
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 55c8c8190391493fb96119b8228073ce
	  System UUID:                55c8c819-0391-493f-b961-19b8228073ce
	  Boot ID:                    d397889d-837e-4880-88b5-0554ffd041a0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jn9l2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     107s
	  kube-system                 etcd-pause-669817                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m2s
	  kube-system                 kube-apiserver-pause-669817             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-pause-669817    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-m4njw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-pause-669817             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 46s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s               kubelet          Node pause-669817 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s               kubelet          Node pause-669817 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s               kubelet          Node pause-669817 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s               kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m2s               kubelet          Starting kubelet.
	  Normal  NodeReady                2m1s               kubelet          Node pause-669817 status is now: NodeReady
	  Normal  RegisteredNode           108s               node-controller  Node pause-669817 event: Registered Node pause-669817 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-669817 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-669817 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-669817 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-669817 event: Registered Node pause-669817 in Controller
	
	
	==> dmesg <==
	[  +9.860959] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.062323] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061236] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.178226] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.122893] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.273943] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.246460] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +5.166557] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.063515] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.498860] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.085098] kauditd_printk_skb: 69 callbacks suppressed
	[Jul25 18:36] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[  +0.116665] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.924200] kauditd_printk_skb: 86 callbacks suppressed
	[Jul25 18:37] systemd-fstab-generator[2407]: Ignoring "noauto" option for root device
	[  +0.167475] systemd-fstab-generator[2419]: Ignoring "noauto" option for root device
	[  +0.183727] systemd-fstab-generator[2434]: Ignoring "noauto" option for root device
	[  +0.159811] systemd-fstab-generator[2445]: Ignoring "noauto" option for root device
	[  +0.284786] systemd-fstab-generator[2473]: Ignoring "noauto" option for root device
	[  +3.668695] systemd-fstab-generator[2669]: Ignoring "noauto" option for root device
	[  +0.698945] kauditd_printk_skb: 185 callbacks suppressed
	[ +10.906815] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.266972] systemd-fstab-generator[3462]: Ignoring "noauto" option for root device
	[ +16.885998] kauditd_printk_skb: 52 callbacks suppressed
	[  +3.371105] systemd-fstab-generator[3917]: Ignoring "noauto" option for root device
	
	
	==> etcd [c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258] <==
	{"level":"info","ts":"2024-07-25T18:37:11.303934Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-07-25T18:37:12.204101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-25T18:37:12.204176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-25T18:37:12.204218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgPreVoteResp from 3dce464254b32e20 at term 2"}
	{"level":"info","ts":"2024-07-25T18:37:12.204234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became candidate at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:12.204239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgVoteResp from 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:12.20425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became leader at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:12.20426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dce464254b32e20 elected leader 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:12.206358Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3dce464254b32e20","local-member-attributes":"{Name:pause-669817 ClientURLs:[https://192.168.61.203:2379]}","request-path":"/0/members/3dce464254b32e20/attributes","cluster-id":"817eda555b894faf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:37:12.206448Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:37:12.206565Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:37:12.209128Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.203:2379"}
	{"level":"info","ts":"2024-07-25T18:37:12.210614Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:37:12.212071Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:37:12.21212Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:37:21.678599Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-25T18:37:21.678693Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-669817","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"]}
	{"level":"warn","ts":"2024-07-25T18:37:21.678797Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:37:21.678937Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:37:21.698491Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.203:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T18:37:21.698534Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.203:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-25T18:37:21.698601Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3dce464254b32e20","current-leader-member-id":"3dce464254b32e20"}
	{"level":"info","ts":"2024-07-25T18:37:21.701996Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-07-25T18:37:21.702192Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-07-25T18:37:21.70222Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-669817","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"]}
	
	
	==> etcd [fe591f55bbac211b40bf00e3b01b0680bd8746c3d7f35bcf10391e1354fe7210] <==
	{"level":"info","ts":"2024-07-25T18:37:34.178093Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T18:37:34.178135Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T18:37:34.179171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 switched to configuration voters=(4453574332218813984)"}
	{"level":"info","ts":"2024-07-25T18:37:34.179247Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","added-peer-id":"3dce464254b32e20","added-peer-peer-urls":["https://192.168.61.203:2380"]}
	{"level":"info","ts":"2024-07-25T18:37:34.179365Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"817eda555b894faf","local-member-id":"3dce464254b32e20","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:37:34.179409Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:37:34.186649Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-25T18:37:34.186885Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3dce464254b32e20","initial-advertise-peer-urls":["https://192.168.61.203:2380"],"listen-peer-urls":["https://192.168.61.203:2380"],"advertise-client-urls":["https://192.168.61.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:37:34.186926Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:37:34.187495Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-07-25T18:37:34.187547Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.203:2380"}
	{"level":"info","ts":"2024-07-25T18:37:35.654097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:35.6542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:35.654246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgPreVoteResp from 3dce464254b32e20 at term 3"}
	{"level":"info","ts":"2024-07-25T18:37:35.654315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became candidate at term 4"}
	{"level":"info","ts":"2024-07-25T18:37:35.654323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 received MsgVoteResp from 3dce464254b32e20 at term 4"}
	{"level":"info","ts":"2024-07-25T18:37:35.654331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dce464254b32e20 became leader at term 4"}
	{"level":"info","ts":"2024-07-25T18:37:35.654339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dce464254b32e20 elected leader 3dce464254b32e20 at term 4"}
	{"level":"info","ts":"2024-07-25T18:37:35.660959Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3dce464254b32e20","local-member-attributes":"{Name:pause-669817 ClientURLs:[https://192.168.61.203:2379]}","request-path":"/0/members/3dce464254b32e20/attributes","cluster-id":"817eda555b894faf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:37:35.661166Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:37:35.661518Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:37:35.663161Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:37:35.663686Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:37:35.663715Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:37:35.667929Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.203:2379"}
	
	
	==> kernel <==
	 18:38:00 up 2 min,  0 users,  load average: 1.66, 0.72, 0.27
	Linux pause-669817 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [297d035de80b8ca5f90bb2ba4e9ca1e8d72c145b65b499384821b87ecd91270f] <==
	I0725 18:37:37.160367       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0725 18:37:37.160512       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 18:37:37.162670       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 18:37:37.170289       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0725 18:37:37.172046       1 shared_informer.go:320] Caches are synced for configmaps
	I0725 18:37:37.172130       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0725 18:37:37.172140       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0725 18:37:37.172326       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0725 18:37:37.179180       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0725 18:37:37.185938       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0725 18:37:37.186052       1 aggregator.go:165] initial CRD sync complete...
	I0725 18:37:37.186113       1 autoregister_controller.go:141] Starting autoregister controller
	I0725 18:37:37.186140       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0725 18:37:37.186211       1 cache.go:39] Caches are synced for autoregister controller
	I0725 18:37:37.187530       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0725 18:37:37.187580       1 policy_source.go:224] refreshing policies
	I0725 18:37:37.227985       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 18:37:38.071545       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 18:37:38.579691       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0725 18:37:38.600855       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0725 18:37:38.667317       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0725 18:37:38.706078       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 18:37:38.714799       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 18:37:49.976171       1 controller.go:615] quota admission added evaluator for: endpoints
	I0725 18:37:49.978323       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7] <==
	W0725 18:37:31.202510       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.202653       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.203969       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.234348       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.275401       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.294081       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.334706       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.354857       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.364959       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.367426       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.367466       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.370858       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.394654       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.451295       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.464589       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.528570       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.556182       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.567168       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.652895       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.654208       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.679284       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.707320       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.723895       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.765901       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0725 18:37:31.980283       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7ad8f7e8072412aef990e67c6187e763bcb0e17aa0e93d85fa4a8817901bfff3] <==
	I0725 18:37:50.009637       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0725 18:37:50.009840       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.466µs"
	I0725 18:37:50.022532       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0725 18:37:50.024066       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0725 18:37:50.026048       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0725 18:37:50.026851       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0725 18:37:50.029455       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0725 18:37:50.030806       1 shared_informer.go:320] Caches are synced for expand
	I0725 18:37:50.030873       1 shared_informer.go:320] Caches are synced for namespace
	I0725 18:37:50.043916       1 shared_informer.go:320] Caches are synced for ephemeral
	I0725 18:37:50.047302       1 shared_informer.go:320] Caches are synced for GC
	I0725 18:37:50.060920       1 shared_informer.go:320] Caches are synced for cronjob
	I0725 18:37:50.071813       1 shared_informer.go:320] Caches are synced for job
	I0725 18:37:50.075278       1 shared_informer.go:320] Caches are synced for taint
	I0725 18:37:50.075429       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0725 18:37:50.075552       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-669817"
	I0725 18:37:50.075598       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0725 18:37:50.081075       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0725 18:37:50.217796       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 18:37:50.221202       1 shared_informer.go:320] Caches are synced for disruption
	I0725 18:37:50.231516       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 18:37:50.257061       1 shared_informer.go:320] Caches are synced for stateful set
	I0725 18:37:50.656730       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 18:37:50.696104       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 18:37:50.696139       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8] <==
	I0725 18:37:15.597738       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0725 18:37:15.597773       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0725 18:37:15.599509       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0725 18:37:15.599624       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0725 18:37:15.599722       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0725 18:37:15.602376       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0725 18:37:15.602405       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0725 18:37:15.602571       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0725 18:37:15.602599       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0725 18:37:15.602619       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0725 18:37:15.602650       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0725 18:37:15.605114       1 controllermanager.go:761] "Started controller" controller="daemonset-controller"
	I0725 18:37:15.605440       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0725 18:37:15.605687       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0725 18:37:15.625898       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0725 18:37:15.626091       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0725 18:37:15.626124       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0725 18:37:15.631525       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0725 18:37:15.631582       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0725 18:37:15.631623       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0725 18:37:15.632154       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0725 18:37:15.635074       1 controllermanager.go:761] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0725 18:37:15.635222       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0725 18:37:15.636298       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0725 18:37:15.648982       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-proxy [46918ce287ec6e5bb99ad04f129e2283864e62e60c410fb8291e431f8ff38411] <==
	I0725 18:37:37.855406       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:37:37.869245       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.203"]
	I0725 18:37:37.918631       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:37:37.918690       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:37:37.918711       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:37:37.921634       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:37:37.921851       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:37:37.921873       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:37:37.923471       1 config.go:192] "Starting service config controller"
	I0725 18:37:37.923502       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:37:37.923526       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:37:37.923530       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:37:37.923887       1 config.go:319] "Starting node config controller"
	I0725 18:37:37.923917       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:37:38.024147       1 shared_informer.go:320] Caches are synced for node config
	I0725 18:37:38.024267       1 shared_informer.go:320] Caches are synced for service config
	I0725 18:37:38.024299       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03] <==
	I0725 18:37:12.151357       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:37:13.588533       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.203"]
	I0725 18:37:13.645031       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:37:13.645106       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:37:13.645121       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:37:13.648462       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:37:13.648740       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:37:13.648765       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:37:13.650696       1 config.go:319] "Starting node config controller"
	I0725 18:37:13.650721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:37:13.651499       1 config.go:192] "Starting service config controller"
	I0725 18:37:13.651584       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:37:13.651833       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:37:13.651887       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:37:13.751241       1 shared_informer.go:320] Caches are synced for node config
	I0725 18:37:13.758953       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:37:13.760490       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed] <==
	I0725 18:37:12.127884       1 serving.go:380] Generated self-signed cert in-memory
	W0725 18:37:13.542368       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 18:37:13.542457       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:37:13.542493       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 18:37:13.542517       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 18:37:13.583870       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 18:37:13.584621       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:37:13.587317       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:37:13.587415       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:37:13.587563       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 18:37:13.587638       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 18:37:13.688761       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:37:21.863684       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0725 18:37:21.863806       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0725 18:37:21.863929       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0725 18:37:21.864485       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d656436d68b25ea03d2fa894a0e4f5492f7f4eea72de6430931cbd7ffdc78919] <==
	I0725 18:37:34.999147       1 serving.go:380] Generated self-signed cert in-memory
	I0725 18:37:37.194643       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 18:37:37.194751       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:37:37.200383       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 18:37:37.200478       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0725 18:37:37.200501       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0725 18:37:37.200529       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 18:37:37.208422       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:37:37.209433       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:37:37.209532       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0725 18:37:37.209558       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 18:37:37.301902       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0725 18:37:37.310699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 18:37:37.310830       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.544769    3469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f489f4d5846c1eb526b11c16fac51984-k8s-certs\") pod \"kube-controller-manager-pause-669817\" (UID: \"f489f4d5846c1eb526b11c16fac51984\") " pod="kube-system/kube-controller-manager-pause-669817"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.544784    3469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f489f4d5846c1eb526b11c16fac51984-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-669817\" (UID: \"f489f4d5846c1eb526b11c16fac51984\") " pod="kube-system/kube-controller-manager-pause-669817"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.544800    3469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/1852460407fc0267ac60e859363806f7-etcd-certs\") pod \"etcd-pause-669817\" (UID: \"1852460407fc0267ac60e859363806f7\") " pod="kube-system/etcd-pause-669817"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.642517    3469 kubelet_node_status.go:73] "Attempting to register node" node="pause-669817"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: E0725 18:37:33.643423    3469 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.203:8443: connect: connection refused" node="pause-669817"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.815114    3469 scope.go:117] "RemoveContainer" containerID="c1e504cf40ebaf86f79fe983eb09f8ef20132fd5858870281f890985c2920258"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.816143    3469 scope.go:117] "RemoveContainer" containerID="5c7008d55b151f408b2e7ecb4de1a88b6a29ebf8d38acf693a48acc9a1ac40f7"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.818107    3469 scope.go:117] "RemoveContainer" containerID="910591d676800073f6b346d4ea07accac26d64ab5e065fb711ec7710fcbd5cc8"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: I0725 18:37:33.819476    3469 scope.go:117] "RemoveContainer" containerID="3e6bd9a4d3c0f16db9d5f7a8017206403fdc5a7c9a2ab6f63044c77f93f1e0ed"
	Jul 25 18:37:33 pause-669817 kubelet[3469]: E0725 18:37:33.941807    3469 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-669817?timeout=10s\": dial tcp 192.168.61.203:8443: connect: connection refused" interval="800ms"
	Jul 25 18:37:34 pause-669817 kubelet[3469]: I0725 18:37:34.045718    3469 kubelet_node_status.go:73] "Attempting to register node" node="pause-669817"
	Jul 25 18:37:34 pause-669817 kubelet[3469]: E0725 18:37:34.047519    3469 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.203:8443: connect: connection refused" node="pause-669817"
	Jul 25 18:37:34 pause-669817 kubelet[3469]: I0725 18:37:34.849802    3469 kubelet_node_status.go:73] "Attempting to register node" node="pause-669817"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.205812    3469 kubelet_node_status.go:112] "Node was previously registered" node="pause-669817"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.206328    3469 kubelet_node_status.go:76] "Successfully registered node" node="pause-669817"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.208867    3469 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.210163    3469 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.317938    3469 apiserver.go:52] "Watching apiserver"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.324431    3469 topology_manager.go:215] "Topology Admit Handler" podUID="f8c1b738-b4ca-4606-b07d-d2ce0d5149a7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jn9l2"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.324592    3469 topology_manager.go:215] "Topology Admit Handler" podUID="300b49b6-c6ee-4298-b856-0579eecc04f4" podNamespace="kube-system" podName="kube-proxy-m4njw"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.341908    3469 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.416634    3469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/300b49b6-c6ee-4298-b856-0579eecc04f4-xtables-lock\") pod \"kube-proxy-m4njw\" (UID: \"300b49b6-c6ee-4298-b856-0579eecc04f4\") " pod="kube-system/kube-proxy-m4njw"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.416774    3469 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/300b49b6-c6ee-4298-b856-0579eecc04f4-lib-modules\") pod \"kube-proxy-m4njw\" (UID: \"300b49b6-c6ee-4298-b856-0579eecc04f4\") " pod="kube-system/kube-proxy-m4njw"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.625965    3469 scope.go:117] "RemoveContainer" containerID="ec10b979248ce117551526d3280e46c1f56b4000c8daf523e37e8240a7220f03"
	Jul 25 18:37:37 pause-669817 kubelet[3469]: I0725 18:37:37.627763    3469 scope.go:117] "RemoveContainer" containerID="0b2ab4d0d15a3228bd1cfed04b05a2440e53ef6b20e34416a530d287e1a93442"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-669817 -n pause-669817
helpers_test.go:261: (dbg) Run:  kubectl --context pause-669817 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (66.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (295.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-108542 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0725 18:39:12.056515   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-108542 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m54.750241809s)

                                                
                                                
-- stdout --
	* [old-k8s-version-108542] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-108542" primary control-plane node in "old-k8s-version-108542" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:39:08.088303   55363 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:39:08.088449   55363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:39:08.088459   55363 out.go:304] Setting ErrFile to fd 2...
	I0725 18:39:08.088463   55363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:39:08.088626   55363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:39:08.089165   55363 out.go:298] Setting JSON to false
	I0725 18:39:08.090144   55363 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4892,"bootTime":1721927856,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:39:08.090202   55363 start.go:139] virtualization: kvm guest
	I0725 18:39:08.092479   55363 out.go:177] * [old-k8s-version-108542] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:39:08.094073   55363 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:39:08.094117   55363 notify.go:220] Checking for updates...
	I0725 18:39:08.096368   55363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:39:08.097568   55363 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:39:08.098791   55363 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:39:08.100154   55363 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:39:08.101576   55363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:39:08.103602   55363 config.go:182] Loaded profile config "cert-expiration-979261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:39:08.103842   55363 config.go:182] Loaded profile config "force-systemd-flag-267077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:39:08.103968   55363 config.go:182] Loaded profile config "kubernetes-upgrade-069209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:39:08.104098   55363 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:39:08.140965   55363 out.go:177] * Using the kvm2 driver based on user configuration
	I0725 18:39:08.142112   55363 start.go:297] selected driver: kvm2
	I0725 18:39:08.142128   55363 start.go:901] validating driver "kvm2" against <nil>
	I0725 18:39:08.142137   55363 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:39:08.142846   55363 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:39:08.142908   55363 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:39:08.157612   55363 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:39:08.157651   55363 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 18:39:08.157845   55363 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:39:08.157870   55363 cni.go:84] Creating CNI manager for ""
	I0725 18:39:08.157878   55363 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:39:08.157886   55363 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 18:39:08.157933   55363 start.go:340] cluster config:
	{Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:39:08.158026   55363 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:39:08.160378   55363 out.go:177] * Starting "old-k8s-version-108542" primary control-plane node in "old-k8s-version-108542" cluster
	I0725 18:39:08.161640   55363 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:39:08.161676   55363 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0725 18:39:08.161683   55363 cache.go:56] Caching tarball of preloaded images
	I0725 18:39:08.161767   55363 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:39:08.161777   55363 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0725 18:39:08.161858   55363 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json ...
	I0725 18:39:08.161872   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json: {Name:mkfcaced8366e382cd1a238674b5b7bf3fc3fd06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:39:08.162009   55363 start.go:360] acquireMachinesLock for old-k8s-version-108542: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:39:36.024884   55363 start.go:364] duration metric: took 27.86283111s to acquireMachinesLock for "old-k8s-version-108542"
	I0725 18:39:36.024956   55363 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:39:36.025074   55363 start.go:125] createHost starting for "" (driver="kvm2")
	I0725 18:39:36.027368   55363 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 18:39:36.027536   55363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:39:36.027579   55363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:39:36.044631   55363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40149
	I0725 18:39:36.045014   55363 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:39:36.045568   55363 main.go:141] libmachine: Using API Version  1
	I0725 18:39:36.045592   55363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:39:36.045951   55363 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:39:36.046143   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:39:36.048489   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:36.048758   55363 start.go:159] libmachine.API.Create for "old-k8s-version-108542" (driver="kvm2")
	I0725 18:39:36.048790   55363 client.go:168] LocalClient.Create starting
	I0725 18:39:36.048834   55363 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 18:39:36.048885   55363 main.go:141] libmachine: Decoding PEM data...
	I0725 18:39:36.048914   55363 main.go:141] libmachine: Parsing certificate...
	I0725 18:39:36.048990   55363 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 18:39:36.049020   55363 main.go:141] libmachine: Decoding PEM data...
	I0725 18:39:36.049042   55363 main.go:141] libmachine: Parsing certificate...
	I0725 18:39:36.049091   55363 main.go:141] libmachine: Running pre-create checks...
	I0725 18:39:36.049108   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .PreCreateCheck
	I0725 18:39:36.049581   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:39:36.050010   55363 main.go:141] libmachine: Creating machine...
	I0725 18:39:36.050028   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .Create
	I0725 18:39:36.050169   55363 main.go:141] libmachine: (old-k8s-version-108542) Creating KVM machine...
	I0725 18:39:36.051386   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found existing default KVM network
	I0725 18:39:36.052990   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:36.052822   55829 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015eb0}
	I0725 18:39:36.053041   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | created network xml: 
	I0725 18:39:36.053059   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | <network>
	I0725 18:39:36.053075   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG |   <name>mk-old-k8s-version-108542</name>
	I0725 18:39:36.053086   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG |   <dns enable='no'/>
	I0725 18:39:36.053096   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG |   
	I0725 18:39:36.053114   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0725 18:39:36.053137   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG |     <dhcp>
	I0725 18:39:36.053154   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0725 18:39:36.053166   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG |     </dhcp>
	I0725 18:39:36.053173   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG |   </ip>
	I0725 18:39:36.053180   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG |   
	I0725 18:39:36.053190   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | </network>
	I0725 18:39:36.053204   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | 
	I0725 18:39:36.059015   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | trying to create private KVM network mk-old-k8s-version-108542 192.168.39.0/24...
	I0725 18:39:36.127590   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | private KVM network mk-old-k8s-version-108542 192.168.39.0/24 created
	I0725 18:39:36.127636   55363 main.go:141] libmachine: (old-k8s-version-108542) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542 ...
	I0725 18:39:36.127654   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:36.127574   55829 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:39:36.127677   55363 main.go:141] libmachine: (old-k8s-version-108542) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 18:39:36.127744   55363 main.go:141] libmachine: (old-k8s-version-108542) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 18:39:36.379006   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:36.378882   55829 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa...
	I0725 18:39:36.604909   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:36.604791   55829 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/old-k8s-version-108542.rawdisk...
	I0725 18:39:36.604938   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Writing magic tar header
	I0725 18:39:36.604951   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Writing SSH key tar header
	I0725 18:39:36.604960   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:36.604897   55829 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542 ...
	I0725 18:39:36.605079   55363 main.go:141] libmachine: (old-k8s-version-108542) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542 (perms=drwx------)
	I0725 18:39:36.605101   55363 main.go:141] libmachine: (old-k8s-version-108542) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 18:39:36.605113   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542
	I0725 18:39:36.605128   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 18:39:36.605148   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:39:36.605176   55363 main.go:141] libmachine: (old-k8s-version-108542) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 18:39:36.605192   55363 main.go:141] libmachine: (old-k8s-version-108542) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 18:39:36.605206   55363 main.go:141] libmachine: (old-k8s-version-108542) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 18:39:36.605221   55363 main.go:141] libmachine: (old-k8s-version-108542) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 18:39:36.605233   55363 main.go:141] libmachine: (old-k8s-version-108542) Creating domain...
	I0725 18:39:36.605249   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 18:39:36.605265   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 18:39:36.605276   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Checking permissions on dir: /home/jenkins
	I0725 18:39:36.605284   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Checking permissions on dir: /home
	I0725 18:39:36.605297   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Skipping /home - not owner
	I0725 18:39:36.606274   55363 main.go:141] libmachine: (old-k8s-version-108542) define libvirt domain using xml: 
	I0725 18:39:36.606302   55363 main.go:141] libmachine: (old-k8s-version-108542) <domain type='kvm'>
	I0725 18:39:36.606318   55363 main.go:141] libmachine: (old-k8s-version-108542)   <name>old-k8s-version-108542</name>
	I0725 18:39:36.606343   55363 main.go:141] libmachine: (old-k8s-version-108542)   <memory unit='MiB'>2200</memory>
	I0725 18:39:36.606356   55363 main.go:141] libmachine: (old-k8s-version-108542)   <vcpu>2</vcpu>
	I0725 18:39:36.606363   55363 main.go:141] libmachine: (old-k8s-version-108542)   <features>
	I0725 18:39:36.606374   55363 main.go:141] libmachine: (old-k8s-version-108542)     <acpi/>
	I0725 18:39:36.606385   55363 main.go:141] libmachine: (old-k8s-version-108542)     <apic/>
	I0725 18:39:36.606397   55363 main.go:141] libmachine: (old-k8s-version-108542)     <pae/>
	I0725 18:39:36.606410   55363 main.go:141] libmachine: (old-k8s-version-108542)     
	I0725 18:39:36.606437   55363 main.go:141] libmachine: (old-k8s-version-108542)   </features>
	I0725 18:39:36.606461   55363 main.go:141] libmachine: (old-k8s-version-108542)   <cpu mode='host-passthrough'>
	I0725 18:39:36.606472   55363 main.go:141] libmachine: (old-k8s-version-108542)   
	I0725 18:39:36.606482   55363 main.go:141] libmachine: (old-k8s-version-108542)   </cpu>
	I0725 18:39:36.606493   55363 main.go:141] libmachine: (old-k8s-version-108542)   <os>
	I0725 18:39:36.606502   55363 main.go:141] libmachine: (old-k8s-version-108542)     <type>hvm</type>
	I0725 18:39:36.606514   55363 main.go:141] libmachine: (old-k8s-version-108542)     <boot dev='cdrom'/>
	I0725 18:39:36.606523   55363 main.go:141] libmachine: (old-k8s-version-108542)     <boot dev='hd'/>
	I0725 18:39:36.606549   55363 main.go:141] libmachine: (old-k8s-version-108542)     <bootmenu enable='no'/>
	I0725 18:39:36.606569   55363 main.go:141] libmachine: (old-k8s-version-108542)   </os>
	I0725 18:39:36.606578   55363 main.go:141] libmachine: (old-k8s-version-108542)   <devices>
	I0725 18:39:36.606588   55363 main.go:141] libmachine: (old-k8s-version-108542)     <disk type='file' device='cdrom'>
	I0725 18:39:36.606632   55363 main.go:141] libmachine: (old-k8s-version-108542)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/boot2docker.iso'/>
	I0725 18:39:36.606649   55363 main.go:141] libmachine: (old-k8s-version-108542)       <target dev='hdc' bus='scsi'/>
	I0725 18:39:36.606658   55363 main.go:141] libmachine: (old-k8s-version-108542)       <readonly/>
	I0725 18:39:36.606673   55363 main.go:141] libmachine: (old-k8s-version-108542)     </disk>
	I0725 18:39:36.606685   55363 main.go:141] libmachine: (old-k8s-version-108542)     <disk type='file' device='disk'>
	I0725 18:39:36.606694   55363 main.go:141] libmachine: (old-k8s-version-108542)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 18:39:36.606711   55363 main.go:141] libmachine: (old-k8s-version-108542)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/old-k8s-version-108542.rawdisk'/>
	I0725 18:39:36.606718   55363 main.go:141] libmachine: (old-k8s-version-108542)       <target dev='hda' bus='virtio'/>
	I0725 18:39:36.606731   55363 main.go:141] libmachine: (old-k8s-version-108542)     </disk>
	I0725 18:39:36.606744   55363 main.go:141] libmachine: (old-k8s-version-108542)     <interface type='network'>
	I0725 18:39:36.606752   55363 main.go:141] libmachine: (old-k8s-version-108542)       <source network='mk-old-k8s-version-108542'/>
	I0725 18:39:36.606760   55363 main.go:141] libmachine: (old-k8s-version-108542)       <model type='virtio'/>
	I0725 18:39:36.606771   55363 main.go:141] libmachine: (old-k8s-version-108542)     </interface>
	I0725 18:39:36.606781   55363 main.go:141] libmachine: (old-k8s-version-108542)     <interface type='network'>
	I0725 18:39:36.606791   55363 main.go:141] libmachine: (old-k8s-version-108542)       <source network='default'/>
	I0725 18:39:36.606800   55363 main.go:141] libmachine: (old-k8s-version-108542)       <model type='virtio'/>
	I0725 18:39:36.606810   55363 main.go:141] libmachine: (old-k8s-version-108542)     </interface>
	I0725 18:39:36.606823   55363 main.go:141] libmachine: (old-k8s-version-108542)     <serial type='pty'>
	I0725 18:39:36.606843   55363 main.go:141] libmachine: (old-k8s-version-108542)       <target port='0'/>
	I0725 18:39:36.606853   55363 main.go:141] libmachine: (old-k8s-version-108542)     </serial>
	I0725 18:39:36.606862   55363 main.go:141] libmachine: (old-k8s-version-108542)     <console type='pty'>
	I0725 18:39:36.606872   55363 main.go:141] libmachine: (old-k8s-version-108542)       <target type='serial' port='0'/>
	I0725 18:39:36.606883   55363 main.go:141] libmachine: (old-k8s-version-108542)     </console>
	I0725 18:39:36.606894   55363 main.go:141] libmachine: (old-k8s-version-108542)     <rng model='virtio'>
	I0725 18:39:36.606904   55363 main.go:141] libmachine: (old-k8s-version-108542)       <backend model='random'>/dev/random</backend>
	I0725 18:39:36.606918   55363 main.go:141] libmachine: (old-k8s-version-108542)     </rng>
	I0725 18:39:36.606929   55363 main.go:141] libmachine: (old-k8s-version-108542)     
	I0725 18:39:36.606939   55363 main.go:141] libmachine: (old-k8s-version-108542)     
	I0725 18:39:36.606950   55363 main.go:141] libmachine: (old-k8s-version-108542)   </devices>
	I0725 18:39:36.606957   55363 main.go:141] libmachine: (old-k8s-version-108542) </domain>
	I0725 18:39:36.606970   55363 main.go:141] libmachine: (old-k8s-version-108542) 
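[Editor's note] The network and domain XML above is what the kvm2 driver hands to libvirt for this profile. A minimal sketch for anyone reproducing the failure locally, assuming a host with the libvirt client tools and access to qemu:///system (an assumption about the debugging environment, not something the test harness runs):

$ virsh --connect qemu:///system net-dumpxml mk-old-k8s-version-108542      # the private network created above
$ virsh --connect qemu:///system dumpxml old-k8s-version-108542             # the domain defined from the XML above
$ virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-108542  # DHCP leases, useful while the "Waiting to get IP" retries below run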
	I0725 18:39:36.611303   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:81:5d:6e in network default
	I0725 18:39:36.611953   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:36.611972   55363 main.go:141] libmachine: (old-k8s-version-108542) Ensuring networks are active...
	I0725 18:39:36.612727   55363 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network default is active
	I0725 18:39:36.613077   55363 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network mk-old-k8s-version-108542 is active
	I0725 18:39:36.613909   55363 main.go:141] libmachine: (old-k8s-version-108542) Getting domain xml...
	I0725 18:39:36.614661   55363 main.go:141] libmachine: (old-k8s-version-108542) Creating domain...
	I0725 18:39:38.136754   55363 main.go:141] libmachine: (old-k8s-version-108542) Waiting to get IP...
	I0725 18:39:38.137784   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:38.138366   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:38.138392   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:38.138276   55829 retry.go:31] will retry after 255.892486ms: waiting for machine to come up
	I0725 18:39:38.395922   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:38.396634   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:38.396664   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:38.396589   55829 retry.go:31] will retry after 259.432873ms: waiting for machine to come up
	I0725 18:39:38.658178   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:38.658726   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:38.658756   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:38.658679   55829 retry.go:31] will retry after 470.348484ms: waiting for machine to come up
	I0725 18:39:39.130233   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:39.130642   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:39.130665   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:39.130612   55829 retry.go:31] will retry after 391.408639ms: waiting for machine to come up
	I0725 18:39:39.523261   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:39.523827   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:39.523853   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:39.523794   55829 retry.go:31] will retry after 698.249711ms: waiting for machine to come up
	I0725 18:39:40.223635   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:40.224236   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:40.224265   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:40.224184   55829 retry.go:31] will retry after 648.316246ms: waiting for machine to come up
	I0725 18:39:40.873841   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:40.874296   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:40.874337   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:40.874248   55829 retry.go:31] will retry after 1.175111915s: waiting for machine to come up
	I0725 18:39:42.051218   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:42.051732   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:42.051761   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:42.051697   55829 retry.go:31] will retry after 1.22886765s: waiting for machine to come up
	I0725 18:39:43.282583   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:43.283057   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:43.283089   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:43.283000   55829 retry.go:31] will retry after 1.203690762s: waiting for machine to come up
	I0725 18:39:44.488534   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:44.488976   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:44.489011   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:44.488944   55829 retry.go:31] will retry after 1.88856032s: waiting for machine to come up
	I0725 18:39:46.379318   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:46.379805   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:46.379835   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:46.379747   55829 retry.go:31] will retry after 1.867968999s: waiting for machine to come up
	I0725 18:39:48.249726   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:48.250358   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:48.250387   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:48.250287   55829 retry.go:31] will retry after 2.336950458s: waiting for machine to come up
	I0725 18:39:50.589457   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:50.590022   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:50.590053   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:50.589945   55829 retry.go:31] will retry after 3.235588362s: waiting for machine to come up
	I0725 18:39:53.827361   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:53.827864   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:39:53.827887   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:39:53.827809   55829 retry.go:31] will retry after 3.500004092s: waiting for machine to come up
	I0725 18:39:57.329555   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.330152   55363 main.go:141] libmachine: (old-k8s-version-108542) Found IP for machine: 192.168.39.29
	I0725 18:39:57.330175   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has current primary IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.330183   55363 main.go:141] libmachine: (old-k8s-version-108542) Reserving static IP address...
	I0725 18:39:57.330607   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"} in network mk-old-k8s-version-108542
	I0725 18:39:57.408265   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Getting to WaitForSSH function...
	I0725 18:39:57.408311   55363 main.go:141] libmachine: (old-k8s-version-108542) Reserved static IP address: 192.168.39.29
	I0725 18:39:57.408373   55363 main.go:141] libmachine: (old-k8s-version-108542) Waiting for SSH to be available...
	I0725 18:39:57.411276   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.411733   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:minikube Clientid:01:52:54:00:19:68:38}
	I0725 18:39:57.411757   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.411955   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH client type: external
	I0725 18:39:57.411983   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa (-rw-------)
	I0725 18:39:57.412022   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:39:57.412033   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | About to run SSH command:
	I0725 18:39:57.412075   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | exit 0
	I0725 18:39:57.544664   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | SSH cmd err, output: <nil>: 
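[Editor's note] The external SSH invocation above is logged as a Go argument slice; stitched back together it corresponds roughly to the command below, which can be re-run by hand from the Jenkins workspace to check reachability of the VM (key path, options and IP taken directly from the log; the trailing exit 0 mirrors what the driver executes):

$ /usr/bin/ssh -F /dev/null \
    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    -o IdentitiesOnly=yes \
    -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa \
    -p 22 docker@192.168.39.29 'exit 0'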
	I0725 18:39:57.544944   55363 main.go:141] libmachine: (old-k8s-version-108542) KVM machine creation complete!
	I0725 18:39:57.545303   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:39:57.545986   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:57.546201   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:57.546394   55363 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 18:39:57.546412   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetState
	I0725 18:39:57.547916   55363 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 18:39:57.547935   55363 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 18:39:57.547944   55363 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 18:39:57.547954   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:57.550582   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.551037   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:57.551079   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.551230   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:57.551402   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.551534   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.551698   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:57.551894   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:57.552076   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:57.552088   55363 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 18:39:57.667498   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:39:57.667531   55363 main.go:141] libmachine: Detecting the provisioner...
	I0725 18:39:57.667541   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:57.670600   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.671047   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:57.671080   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.671197   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:57.671394   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.671579   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.671763   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:57.671923   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:57.672243   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:57.672259   55363 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 18:39:57.784812   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 18:39:57.784871   55363 main.go:141] libmachine: found compatible host: buildroot
	I0725 18:39:57.784879   55363 main.go:141] libmachine: Provisioning with buildroot...
	I0725 18:39:57.784890   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:39:57.785189   55363 buildroot.go:166] provisioning hostname "old-k8s-version-108542"
	I0725 18:39:57.785222   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:39:57.785436   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:57.788306   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.788747   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:57.788788   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.788974   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:57.789142   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.789331   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.789475   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:57.789673   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:57.789898   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:57.789916   55363 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-108542 && echo "old-k8s-version-108542" | sudo tee /etc/hostname
	I0725 18:39:57.917676   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-108542
	
	I0725 18:39:57.917713   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:57.920786   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.921301   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:57.921334   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:57.921518   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:57.921725   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.921942   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:57.922087   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:57.922291   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:57.922498   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:57.922522   55363 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-108542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-108542/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-108542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:39:58.053079   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:39:58.053111   55363 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:39:58.053172   55363 buildroot.go:174] setting up certificates
	I0725 18:39:58.053182   55363 provision.go:84] configureAuth start
	I0725 18:39:58.053204   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:39:58.053513   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:39:58.056481   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.056860   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.056891   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.057079   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:58.059354   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.059764   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.059789   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.059991   55363 provision.go:143] copyHostCerts
	I0725 18:39:58.060076   55363 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:39:58.060096   55363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:39:58.060169   55363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:39:58.060345   55363 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:39:58.060360   55363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:39:58.060402   55363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:39:58.060506   55363 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:39:58.060517   55363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:39:58.060546   55363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:39:58.060618   55363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-108542 san=[127.0.0.1 192.168.39.29 localhost minikube old-k8s-version-108542]
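[Editor's note] If the cluster later fails TLS verification, the server certificate whose generation is logged here can be checked directly. A minimal sketch, assuming openssl is available on the Jenkins host; it prints the certificate and the SAN section, which should match the 127.0.0.1 / 192.168.39.29 / localhost / minikube / old-k8s-version-108542 set listed above:

$ openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'   # show only the SAN entries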
	I0725 18:39:58.385792   55363 provision.go:177] copyRemoteCerts
	I0725 18:39:58.385865   55363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:39:58.385913   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:58.389196   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.389616   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.389646   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.389890   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:58.390102   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.390308   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:58.390457   55363 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:39:58.478436   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:39:58.501251   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0725 18:39:58.527373   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:39:58.549842   55363 provision.go:87] duration metric: took 496.643249ms to configureAuth
	I0725 18:39:58.549872   55363 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:39:58.550076   55363 config.go:182] Loaded profile config "old-k8s-version-108542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:39:58.550159   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:58.552643   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.553003   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.553034   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.553164   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:58.553368   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.553557   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.553700   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:58.553938   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:58.554160   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:58.554177   55363 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:39:58.825636   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:39:58.825665   55363 main.go:141] libmachine: Checking connection to Docker...
	I0725 18:39:58.825676   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetURL
	I0725 18:39:58.826956   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using libvirt version 6000000
	I0725 18:39:58.829526   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.829895   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.829917   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.830108   55363 main.go:141] libmachine: Docker is up and running!
	I0725 18:39:58.830124   55363 main.go:141] libmachine: Reticulating splines...
	I0725 18:39:58.830131   55363 client.go:171] duration metric: took 22.781331722s to LocalClient.Create
	I0725 18:39:58.830158   55363 start.go:167] duration metric: took 22.781400806s to libmachine.API.Create "old-k8s-version-108542"
	I0725 18:39:58.830171   55363 start.go:293] postStartSetup for "old-k8s-version-108542" (driver="kvm2")
	I0725 18:39:58.830205   55363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:39:58.830227   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:58.830470   55363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:39:58.830495   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:58.832941   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.833399   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.833426   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.833564   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:58.833719   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.833856   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:58.833990   55363 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:39:58.918647   55363 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:39:58.922535   55363 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:39:58.922561   55363 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:39:58.922626   55363 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:39:58.922709   55363 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:39:58.922795   55363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:39:58.931764   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:39:58.954933   55363 start.go:296] duration metric: took 124.733843ms for postStartSetup
	I0725 18:39:58.954985   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:39:58.955577   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:39:58.958702   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.959459   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.959496   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.959717   55363 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json ...
	I0725 18:39:58.959955   55363 start.go:128] duration metric: took 22.93486958s to createHost
	I0725 18:39:58.959982   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:58.962374   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.962692   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:58.962719   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:58.962843   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:58.963023   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.963240   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:58.963443   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:58.963592   55363 main.go:141] libmachine: Using SSH client type: native
	I0725 18:39:58.963772   55363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:39:58.963858   55363 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 18:39:59.076848   55363 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721932799.042325621
	
	I0725 18:39:59.076869   55363 fix.go:216] guest clock: 1721932799.042325621
	I0725 18:39:59.076878   55363 fix.go:229] Guest: 2024-07-25 18:39:59.042325621 +0000 UTC Remote: 2024-07-25 18:39:58.959970358 +0000 UTC m=+50.903762414 (delta=82.355263ms)
	I0725 18:39:59.076925   55363 fix.go:200] guest clock delta is within tolerance: 82.355263ms
	I0725 18:39:59.076933   55363 start.go:83] releasing machines lock for "old-k8s-version-108542", held for 23.052020473s
	I0725 18:39:59.076967   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:59.077243   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:39:59.080294   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.080660   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:59.080689   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.080923   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:59.081563   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:59.081738   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:39:59.081816   55363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:39:59.081870   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:59.082011   55363 ssh_runner.go:195] Run: cat /version.json
	I0725 18:39:59.082040   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:39:59.085042   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.085208   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.085413   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:59.085442   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.085604   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:59.085615   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:39:59.085651   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:39:59.085775   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:59.085838   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:39:59.085947   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:59.086033   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:39:59.086107   55363 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:39:59.086469   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:39:59.086690   55363 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:39:59.209375   55363 ssh_runner.go:195] Run: systemctl --version
	I0725 18:39:59.215649   55363 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:39:59.376812   55363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:39:59.382914   55363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:39:59.382994   55363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:39:59.399576   55363 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
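The two steps above are how minikube hands the CNI directory over to its own bridge config: it first probes for a loopback conf (absence is only logged), then renames any existing bridge/podman CNI files so cri-o ignores them. A minimal shell sketch with the same effect (not minikube's own code; only the .mk_disabled suffix is taken from the log):

    # disable pre-existing bridge/podman CNI configs by renaming them
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      case "$f" in *.mk_disabled) continue ;; esac
      [ -e "$f" ] && sudo mv "$f" "$f.mk_disabled"
    done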
	I0725 18:39:59.399602   55363 start.go:495] detecting cgroup driver to use...
	I0725 18:39:59.399665   55363 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:39:59.417234   55363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:39:59.430697   55363 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:39:59.430764   55363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:39:59.446466   55363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:39:59.460997   55363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:39:59.585882   55363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:39:59.730360   55363 docker.go:233] disabling docker service ...
	I0725 18:39:59.730417   55363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:39:59.748258   55363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:39:59.761130   55363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:39:59.905267   55363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:40:00.024831   55363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:40:00.039802   55363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:40:00.057521   55363 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0725 18:40:00.057574   55363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:00.066917   55363 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:40:00.066992   55363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:00.076664   55363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:00.086490   55363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:40:00.095845   55363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:40:00.105984   55363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:40:00.114771   55363 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:40:00.114833   55363 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:40:00.127039   55363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:40:00.136671   55363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:40:00.266606   55363 ssh_runner.go:195] Run: sudo systemctl restart crio
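Everything from the crictl.yaml write through the restart above is the runtime hand-off to cri-o: crictl is pointed at the cri-o socket, the pause image and cgroup handling are sed-edited into the 02-crio.conf drop-in, and crio is restarted to pick the changes up. A condensed sketch of the end state (the drop-in keys below are only the three the sed commands touch; the real file carries more settings and minikube edits it in place rather than replacing it):

    # crictl default endpoint
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # keys enforced in /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image    = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup  = "pod"

    sudo systemctl daemon-reload && sudo systemctl restart crio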
	I0725 18:40:00.413007   55363 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:40:00.413084   55363 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:40:00.417628   55363 start.go:563] Will wait 60s for crictl version
	I0725 18:40:00.417694   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:00.421134   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:40:00.460209   55363 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:40:00.460295   55363 ssh_runner.go:195] Run: crio --version
	I0725 18:40:00.487130   55363 ssh_runner.go:195] Run: crio --version
	I0725 18:40:00.519004   55363 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0725 18:40:00.520234   55363 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:40:00.523718   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:40:00.524159   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:40:00.524195   55363 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:40:00.524432   55363 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:40:00.529740   55363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
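The bash one-liner above is minikube's idempotent /etc/hosts update: strip any existing line ending in the hostname, append the current mapping, and copy the rebuilt file back with sudo. Annotated, with the values from this run:

    # drop any stale host.minikube.internal entry, append the fresh one,
    # then install the result over /etc/hosts
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.39.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts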
	I0725 18:40:00.545736   55363 kubeadm.go:883] updating cluster {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:40:00.545874   55363 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:40:00.545935   55363 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:40:00.584454   55363 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:40:00.584531   55363 ssh_runner.go:195] Run: which lz4
	I0725 18:40:00.588881   55363 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 18:40:00.592738   55363 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:40:00.592795   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0725 18:40:02.105945   55363 crio.go:462] duration metric: took 1.517092405s to copy over tarball
	I0725 18:40:02.106041   55363 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:40:04.747559   55363 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.641480879s)
	I0725 18:40:04.747599   55363 crio.go:469] duration metric: took 2.641617846s to extract the tarball
	I0725 18:40:04.747610   55363 ssh_runner.go:146] rm: /preloaded.tar.lz4
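The preload flow above: the guest has no /preloaded.tar.lz4, so the cached tarball (~450 MB) is scp'd over, unpacked into /var, and then deleted. The extraction command, annotated (same flags as logged):

    # unpack preloaded images and kubelet state into /var, keeping extended
    # attributes (file capabilities), decompressing with lz4 on the fly
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4    # reclaim the space once extracted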
	I0725 18:40:04.789962   55363 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:40:04.835104   55363 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:40:04.835134   55363 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:40:04.835204   55363 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:40:04.835261   55363 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:40:04.835269   55363 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:40:04.835283   55363 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0725 18:40:04.835244   55363 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:40:04.835325   55363 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0725 18:40:04.835244   55363 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:40:04.835561   55363 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:40:04.836832   55363 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0725 18:40:04.836851   55363 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:40:04.836858   55363 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:40:04.836831   55363 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:40:04.836877   55363 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:40:04.836832   55363 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:40:04.836835   55363 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0725 18:40:04.837185   55363 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:40:05.063753   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0725 18:40:05.074051   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0725 18:40:05.092907   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:40:05.093187   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:40:05.095225   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0725 18:40:05.106782   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:40:05.146697   55363 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0725 18:40:05.146761   55363 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:40:05.146829   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.159970   55363 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0725 18:40:05.160009   55363 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0725 18:40:05.160052   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.199196   55363 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0725 18:40:05.199241   55363 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:40:05.199291   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.200833   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:40:05.250337   55363 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0725 18:40:05.250385   55363 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:40:05.250435   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.256798   55363 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0725 18:40:05.256840   55363 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0725 18:40:05.256859   55363 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0725 18:40:05.256880   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.256888   55363 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:40:05.256912   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0725 18:40:05.256931   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0725 18:40:05.256936   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.257017   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:40:05.274907   55363 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0725 18:40:05.274948   55363 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:40:05.274944   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:40:05.275006   55363 ssh_runner.go:195] Run: which crictl
	I0725 18:40:05.275947   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0725 18:40:05.365454   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:40:05.365502   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0725 18:40:05.365541   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0725 18:40:05.365593   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0725 18:40:05.365601   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0725 18:40:05.365652   55363 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:40:05.375451   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0725 18:40:05.410490   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0725 18:40:05.410755   55363 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0725 18:40:05.679221   55363 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:40:05.824392   55363 cache_images.go:92] duration metric: took 989.23997ms to LoadCachedImages
	W0725 18:40:05.824487   55363 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
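The warning above means the per-image cache files named at cache_images.go:289 do not exist on the Jenkins host, so none of the v1.20.0 control-plane images can be side-loaded and they are pulled during kubeadm's preflight instead. To see which cached image tarballs are actually present on the host (path taken from the log):

    ls -l /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/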
	I0725 18:40:05.824505   55363 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.20.0 crio true true} ...
	I0725 18:40:05.824653   55363 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-108542 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:40:05.824735   55363 ssh_runner.go:195] Run: crio config
	I0725 18:40:05.872925   55363 cni.go:84] Creating CNI manager for ""
	I0725 18:40:05.872942   55363 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:40:05.872950   55363 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:40:05.872967   55363 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-108542 NodeName:old-k8s-version-108542 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0725 18:40:05.873082   55363 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-108542"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:40:05.873156   55363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0725 18:40:05.883172   55363 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:40:05.883242   55363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:40:05.892444   55363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0725 18:40:05.910094   55363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:40:05.927108   55363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0725 18:40:05.944764   55363 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0725 18:40:05.948359   55363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:40:05.960525   55363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:40:06.091832   55363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:40:06.108764   55363 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542 for IP: 192.168.39.29
	I0725 18:40:06.108788   55363 certs.go:194] generating shared ca certs ...
	I0725 18:40:06.108807   55363 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.108952   55363 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:40:06.109018   55363 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:40:06.109030   55363 certs.go:256] generating profile certs ...
	I0725 18:40:06.109096   55363 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key
	I0725 18:40:06.109114   55363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt with IP's: []
	I0725 18:40:06.211721   55363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt ...
	I0725 18:40:06.211754   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: {Name:mk8328536e6d3e3be7b69becd8ce6118480d4a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.211946   55363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key ...
	I0725 18:40:06.211965   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key: {Name:mk8e33c79977a60da7b73fdc37309f0c31106033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.212070   55363 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0
	I0725 18:40:06.212090   55363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt.da8b5ed0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.29]
	I0725 18:40:06.367802   55363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt.da8b5ed0 ...
	I0725 18:40:06.367833   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt.da8b5ed0: {Name:mk91608cd2a2de482eeb1632fee3d4305bd1201d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.368013   55363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0 ...
	I0725 18:40:06.368031   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0: {Name:mk4bb7258cc724f63f302746925003a7acfe5435 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.368122   55363 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt.da8b5ed0 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt
	I0725 18:40:06.368242   55363 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key
	I0725 18:40:06.368345   55363 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key
	I0725 18:40:06.368369   55363 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt with IP's: []
	I0725 18:40:06.502724   55363 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt ...
	I0725 18:40:06.502756   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt: {Name:mk92d2a6177ff7a114ce9ed043355ebaa1c7b554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.582324   55363 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key ...
	I0725 18:40:06.582381   55363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key: {Name:mk0843904be2b18411f9215c4b88ee807d70f9ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:40:06.582628   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:40:06.582680   55363 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:40:06.582696   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:40:06.582724   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:40:06.582752   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:40:06.582776   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:40:06.582823   55363 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:40:06.583537   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:40:06.610789   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:40:06.637514   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:40:06.666644   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:40:06.689890   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 18:40:06.729398   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:40:06.755382   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:40:06.780347   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:40:06.805821   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:40:06.828814   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:40:06.852649   55363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:40:06.876771   55363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:40:06.893439   55363 ssh_runner.go:195] Run: openssl version
	I0725 18:40:06.898916   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:40:06.909715   55363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:40:06.913779   55363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:40:06.913835   55363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:40:06.919070   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:40:06.929184   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:40:06.939501   55363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:40:06.943820   55363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:40:06.943885   55363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:40:06.949830   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:40:06.963133   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:40:06.974988   55363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:40:06.983118   55363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:40:06.983196   55363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:40:06.992387   55363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
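The openssl/ln sequence above publishes each CA bundle through OpenSSL's hashed-directory lookup: the PEM is copied under /usr/share/ca-certificates, linked into /etc/ssl/certs, and a second symlink named <subject-hash>.0 is created so openssl can find it by hash. The same steps for one file, written out (filenames from the log; the hash is whatever openssl prints, b5213941 in this run):

    # compute the subject-name hash OpenSSL uses for lookups in /etc/ssl/certs
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # force-create the <hash>.0 symlink (the log guards this with a test -L first)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"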
	I0725 18:40:07.006291   55363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:40:07.010904   55363 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 18:40:07.010974   55363 kubeadm.go:392] StartCluster: {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:40:07.011080   55363 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:40:07.011158   55363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:40:07.065948   55363 cri.go:89] found id: ""
	I0725 18:40:07.066029   55363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:40:07.076406   55363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:40:07.085776   55363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:40:07.094830   55363 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:40:07.094850   55363 kubeadm.go:157] found existing configuration files:
	
	I0725 18:40:07.094890   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:40:07.103810   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:40:07.103880   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:40:07.113126   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:40:07.121382   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:40:07.121441   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:40:07.129987   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:40:07.138871   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:40:07.138935   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:40:07.147572   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:40:07.159201   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:40:07.159267   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:40:07.168883   55363 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:40:07.286011   55363 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:40:07.286257   55363 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:40:07.441437   55363 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:40:07.441654   55363 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:40:07.441804   55363 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:40:07.636278   55363 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:40:07.848915   55363 out.go:204]   - Generating certificates and keys ...
	I0725 18:40:07.849053   55363 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:40:07.849155   55363 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:40:07.849254   55363 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 18:40:07.927608   55363 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 18:40:08.047052   55363 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 18:40:08.153679   55363 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 18:40:08.307176   55363 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 18:40:08.307492   55363 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-108542] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0725 18:40:08.375108   55363 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 18:40:08.375273   55363 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-108542] and IPs [192.168.39.29 127.0.0.1 ::1]
	I0725 18:40:08.586311   55363 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 18:40:08.691328   55363 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 18:40:08.744404   55363 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 18:40:08.744656   55363 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:40:08.956232   55363 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:40:09.604946   55363 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:40:09.900696   55363 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:40:10.076436   55363 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:40:10.094477   55363 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:40:10.094611   55363 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:40:10.094673   55363 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:40:10.240894   55363 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:40:10.397143   55363 out.go:204]   - Booting up control plane ...
	I0725 18:40:10.397296   55363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:40:10.397398   55363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:40:10.397492   55363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:40:10.397598   55363 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:40:10.397786   55363 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:40:50.253327   55363 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:40:50.253457   55363 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:40:50.253749   55363 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:40:55.253214   55363 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:40:55.253525   55363 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:41:05.252804   55363 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:41:05.253106   55363 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:41:25.253381   55363 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:41:25.253547   55363 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:42:05.255183   55363 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:42:05.255426   55363 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:42:05.255441   55363 kubeadm.go:310] 
	I0725 18:42:05.255526   55363 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:42:05.255562   55363 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:42:05.255568   55363 kubeadm.go:310] 
	I0725 18:42:05.255628   55363 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:42:05.255683   55363 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:42:05.255824   55363 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:42:05.255837   55363 kubeadm.go:310] 
	I0725 18:42:05.255979   55363 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:42:05.256022   55363 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:42:05.256054   55363 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:42:05.256063   55363 kubeadm.go:310] 
	I0725 18:42:05.256188   55363 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:42:05.256301   55363 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:42:05.256313   55363 kubeadm.go:310] 
	I0725 18:42:05.256473   55363 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:42:05.256598   55363 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:42:05.256701   55363 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:42:05.256786   55363 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:42:05.256802   55363 kubeadm.go:310] 
	I0725 18:42:05.257329   55363 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:42:05.257430   55363 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:42:05.257484   55363 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0725 18:42:05.257593   55363 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-108542] and IPs [192.168.39.29 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-108542] and IPs [192.168.39.29 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-108542] and IPs [192.168.39.29 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-108542] and IPs [192.168.39.29 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 18:42:05.257636   55363 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:42:05.731633   55363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:42:05.745419   55363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:42:05.754584   55363 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:42:05.754605   55363 kubeadm.go:157] found existing configuration files:
	
	I0725 18:42:05.754647   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:42:05.763805   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:42:05.763866   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:42:05.772913   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:42:05.782261   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:42:05.782326   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:42:05.791477   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:42:05.800294   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:42:05.800374   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:42:05.809436   55363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:42:05.819069   55363 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:42:05.819118   55363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:42:05.829080   55363 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:42:05.903428   55363 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:42:05.903507   55363 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:42:06.068883   55363 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:42:06.069013   55363 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:42:06.069150   55363 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:42:06.260471   55363 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:42:06.263150   55363 out.go:204]   - Generating certificates and keys ...
	I0725 18:42:06.263255   55363 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:42:06.263354   55363 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:42:06.263430   55363 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:42:06.263512   55363 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:42:06.263607   55363 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:42:06.263680   55363 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:42:06.263735   55363 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:42:06.263826   55363 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:42:06.263937   55363 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:42:06.264053   55363 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:42:06.264125   55363 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:42:06.264199   55363 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:42:06.355034   55363 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:42:06.599751   55363 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:42:06.826783   55363 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:42:07.018500   55363 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:42:07.035467   55363 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:42:07.036096   55363 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:42:07.036175   55363 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:42:07.157333   55363 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:42:07.159056   55363 out.go:204]   - Booting up control plane ...
	I0725 18:42:07.159151   55363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:42:07.165687   55363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:42:07.166578   55363 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:42:07.167391   55363 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:42:07.169490   55363 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:42:47.167664   55363 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:42:47.168095   55363 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:42:47.168265   55363 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:42:52.168336   55363 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:42:52.168582   55363 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:43:02.168504   55363 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:43:02.168803   55363 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:43:22.169267   55363 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:43:22.169514   55363 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:44:02.171666   55363 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:44:02.171953   55363 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:44:02.171968   55363 kubeadm.go:310] 
	I0725 18:44:02.172038   55363 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:44:02.172099   55363 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:44:02.172109   55363 kubeadm.go:310] 
	I0725 18:44:02.172146   55363 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:44:02.172186   55363 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:44:02.172371   55363 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:44:02.172393   55363 kubeadm.go:310] 
	I0725 18:44:02.172513   55363 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:44:02.172545   55363 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:44:02.172575   55363 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:44:02.172582   55363 kubeadm.go:310] 
	I0725 18:44:02.172717   55363 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:44:02.172855   55363 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:44:02.172875   55363 kubeadm.go:310] 
	I0725 18:44:02.173029   55363 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:44:02.173140   55363 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:44:02.173271   55363 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:44:02.173372   55363 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:44:02.173382   55363 kubeadm.go:310] 
	I0725 18:44:02.174025   55363 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:44:02.174117   55363 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:44:02.174260   55363 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0725 18:44:02.174305   55363 kubeadm.go:394] duration metric: took 3m55.163334882s to StartCluster
	I0725 18:44:02.174352   55363 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:44:02.174416   55363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:44:02.216866   55363 cri.go:89] found id: ""
	I0725 18:44:02.216894   55363 logs.go:276] 0 containers: []
	W0725 18:44:02.216906   55363 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:44:02.216913   55363 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:44:02.216981   55363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:44:02.253766   55363 cri.go:89] found id: ""
	I0725 18:44:02.253794   55363 logs.go:276] 0 containers: []
	W0725 18:44:02.253804   55363 logs.go:278] No container was found matching "etcd"
	I0725 18:44:02.253812   55363 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:44:02.253878   55363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:44:02.289600   55363 cri.go:89] found id: ""
	I0725 18:44:02.289630   55363 logs.go:276] 0 containers: []
	W0725 18:44:02.289641   55363 logs.go:278] No container was found matching "coredns"
	I0725 18:44:02.289648   55363 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:44:02.289702   55363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:44:02.323435   55363 cri.go:89] found id: ""
	I0725 18:44:02.323464   55363 logs.go:276] 0 containers: []
	W0725 18:44:02.323475   55363 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:44:02.323481   55363 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:44:02.323542   55363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:44:02.357627   55363 cri.go:89] found id: ""
	I0725 18:44:02.357694   55363 logs.go:276] 0 containers: []
	W0725 18:44:02.357712   55363 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:44:02.357720   55363 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:44:02.357777   55363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:44:02.401741   55363 cri.go:89] found id: ""
	I0725 18:44:02.401775   55363 logs.go:276] 0 containers: []
	W0725 18:44:02.401787   55363 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:44:02.401794   55363 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:44:02.401857   55363 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:44:02.443438   55363 cri.go:89] found id: ""
	I0725 18:44:02.443471   55363 logs.go:276] 0 containers: []
	W0725 18:44:02.443483   55363 logs.go:278] No container was found matching "kindnet"
	I0725 18:44:02.443493   55363 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:44:02.443507   55363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:44:02.557700   55363 logs.go:123] Gathering logs for container status ...
	I0725 18:44:02.557732   55363 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:44:02.598788   55363 logs.go:123] Gathering logs for kubelet ...
	I0725 18:44:02.598817   55363 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:44:02.657474   55363 logs.go:123] Gathering logs for dmesg ...
	I0725 18:44:02.657508   55363 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:44:02.672673   55363 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:44:02.672709   55363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:44:02.792744   55363 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0725 18:44:02.792808   55363 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 18:44:02.792848   55363 out.go:239] * 
	* 
	W0725 18:44:02.792964   55363 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:44:02.792988   55363 out.go:239] * 
	* 
	W0725 18:44:02.793857   55363 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:44:02.796582   55363 out.go:177] 
	W0725 18:44:02.797610   55363 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:44:02.797674   55363 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 18:44:02.797706   55363 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 18:44:02.799566   55363 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-108542 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 6 (234.727085ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:44:03.070712   58537 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-108542" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-108542" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (295.04s)
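The kubeadm output above already names the checks to run when the kubelet never reports healthy. A minimal sketch of those diagnostics, assuming the VM is still reachable (for example via `minikube ssh -p old-k8s-version-108542`, a follow-up step the test itself does not run):

	curl -sSL http://localhost:10248/healthz        # the kubelet health endpoint kubeadm polls
	systemctl status kubelet                        # is the kubelet unit active?
	journalctl -xeu kubelet | tail -n 50            # recent kubelet log lines
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If a control-plane container shows up as exited, its logs can be read with `crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID`, as the kubeadm message suggests.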

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-371663 --alsologtostderr -v=3
E0725 18:41:58.589775   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-371663 --alsologtostderr -v=3: exit status 82 (2m0.66732621s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-371663"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:41:49.301733   57255 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:41:49.301846   57255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:41:49.301855   57255 out.go:304] Setting ErrFile to fd 2...
	I0725 18:41:49.301859   57255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:41:49.302031   57255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:41:49.302244   57255 out.go:298] Setting JSON to false
	I0725 18:41:49.302308   57255 mustload.go:65] Loading cluster: no-preload-371663
	I0725 18:41:49.302615   57255 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:41:49.302685   57255 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/config.json ...
	I0725 18:41:49.302826   57255 mustload.go:65] Loading cluster: no-preload-371663
	I0725 18:41:49.302918   57255 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:41:49.302943   57255 stop.go:39] StopHost: no-preload-371663
	I0725 18:41:49.303316   57255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:41:49.303353   57255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:41:49.317598   57255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36561
	I0725 18:41:49.317988   57255 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:41:49.318607   57255 main.go:141] libmachine: Using API Version  1
	I0725 18:41:49.318637   57255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:41:49.318933   57255 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:41:49.321276   57255 out.go:177] * Stopping node "no-preload-371663"  ...
	I0725 18:41:49.322355   57255 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0725 18:41:49.322382   57255 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:41:49.322620   57255 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0725 18:41:49.322639   57255 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:41:49.325810   57255 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:41:49.326217   57255 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:40:13 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:41:49.326244   57255 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:41:49.326384   57255 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:41:49.326530   57255 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:41:49.326683   57255 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:41:49.326848   57255 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:41:49.430913   57255 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0725 18:41:49.497872   57255 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0725 18:41:49.556654   57255 main.go:141] libmachine: Stopping "no-preload-371663"...
	I0725 18:41:49.556705   57255 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:41:49.558339   57255 main.go:141] libmachine: (no-preload-371663) Calling .Stop
	I0725 18:41:49.562271   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 0/120
	I0725 18:41:50.563790   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 1/120
	I0725 18:41:51.565250   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 2/120
	I0725 18:41:52.566941   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 3/120
	I0725 18:41:53.568610   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 4/120
	I0725 18:41:54.569978   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 5/120
	I0725 18:41:55.572006   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 6/120
	I0725 18:41:56.573484   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 7/120
	I0725 18:41:57.574587   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 8/120
	I0725 18:41:58.576491   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 9/120
	I0725 18:41:59.578551   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 10/120
	I0725 18:42:00.580633   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 11/120
	I0725 18:42:01.581997   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 12/120
	I0725 18:42:02.584317   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 13/120
	I0725 18:42:03.585787   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 14/120
	I0725 18:42:04.587809   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 15/120
	I0725 18:42:05.589517   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 16/120
	I0725 18:42:06.591255   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 17/120
	I0725 18:42:07.593342   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 18/120
	I0725 18:42:08.594736   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 19/120
	I0725 18:42:09.596689   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 20/120
	I0725 18:42:10.598231   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 21/120
	I0725 18:42:11.599502   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 22/120
	I0725 18:42:12.602058   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 23/120
	I0725 18:42:13.603441   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 24/120
	I0725 18:42:14.605597   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 25/120
	I0725 18:42:15.606983   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 26/120
	I0725 18:42:16.608542   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 27/120
	I0725 18:42:17.609781   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 28/120
	I0725 18:42:18.611187   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 29/120
	I0725 18:42:19.613710   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 30/120
	I0725 18:42:20.615040   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 31/120
	I0725 18:42:21.616497   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 32/120
	I0725 18:42:22.618884   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 33/120
	I0725 18:42:23.620220   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 34/120
	I0725 18:42:24.622072   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 35/120
	I0725 18:42:25.771773   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 36/120
	I0725 18:42:26.773550   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 37/120
	I0725 18:42:27.775247   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 38/120
	I0725 18:42:28.776752   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 39/120
	I0725 18:42:29.779014   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 40/120
	I0725 18:42:30.780642   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 41/120
	I0725 18:42:31.782009   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 42/120
	I0725 18:42:32.783515   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 43/120
	I0725 18:42:33.785089   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 44/120
	I0725 18:42:34.787597   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 45/120
	I0725 18:42:35.789268   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 46/120
	I0725 18:42:36.791466   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 47/120
	I0725 18:42:37.793011   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 48/120
	I0725 18:42:38.794466   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 49/120
	I0725 18:42:39.796896   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 50/120
	I0725 18:42:40.798957   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 51/120
	I0725 18:42:41.800247   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 52/120
	I0725 18:42:42.801757   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 53/120
	I0725 18:42:43.803190   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 54/120
	I0725 18:42:44.805565   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 55/120
	I0725 18:42:45.806988   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 56/120
	I0725 18:42:46.808580   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 57/120
	I0725 18:42:47.810409   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 58/120
	I0725 18:42:48.813268   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 59/120
	I0725 18:42:49.815526   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 60/120
	I0725 18:42:50.817407   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 61/120
	I0725 18:42:51.818898   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 62/120
	I0725 18:42:52.820373   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 63/120
	I0725 18:42:53.821776   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 64/120
	I0725 18:42:54.823969   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 65/120
	I0725 18:42:55.826037   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 66/120
	I0725 18:42:56.827423   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 67/120
	I0725 18:42:57.829037   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 68/120
	I0725 18:42:58.831147   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 69/120
	I0725 18:42:59.833278   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 70/120
	I0725 18:43:00.834826   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 71/120
	I0725 18:43:01.836190   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 72/120
	I0725 18:43:02.837937   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 73/120
	I0725 18:43:03.839710   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 74/120
	I0725 18:43:04.841713   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 75/120
	I0725 18:43:05.843315   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 76/120
	I0725 18:43:06.845115   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 77/120
	I0725 18:43:07.847081   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 78/120
	I0725 18:43:08.849139   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 79/120
	I0725 18:43:09.851210   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 80/120
	I0725 18:43:10.852934   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 81/120
	I0725 18:43:11.854553   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 82/120
	I0725 18:43:12.856014   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 83/120
	I0725 18:43:13.857867   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 84/120
	I0725 18:43:14.859822   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 85/120
	I0725 18:43:15.861302   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 86/120
	I0725 18:43:16.862956   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 87/120
	I0725 18:43:17.864107   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 88/120
	I0725 18:43:18.865671   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 89/120
	I0725 18:43:19.867824   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 90/120
	I0725 18:43:20.869146   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 91/120
	I0725 18:43:21.870452   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 92/120
	I0725 18:43:22.871930   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 93/120
	I0725 18:43:23.873556   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 94/120
	I0725 18:43:24.875617   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 95/120
	I0725 18:43:25.876968   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 96/120
	I0725 18:43:26.878638   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 97/120
	I0725 18:43:27.880419   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 98/120
	I0725 18:43:28.883105   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 99/120
	I0725 18:43:29.885411   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 100/120
	I0725 18:43:30.887629   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 101/120
	I0725 18:43:31.889546   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 102/120
	I0725 18:43:32.891173   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 103/120
	I0725 18:43:33.893504   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 104/120
	I0725 18:43:34.895589   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 105/120
	I0725 18:43:35.897315   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 106/120
	I0725 18:43:36.898816   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 107/120
	I0725 18:43:37.900287   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 108/120
	I0725 18:43:38.901730   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 109/120
	I0725 18:43:39.903683   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 110/120
	I0725 18:43:40.905278   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 111/120
	I0725 18:43:41.906814   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 112/120
	I0725 18:43:42.908341   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 113/120
	I0725 18:43:43.909879   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 114/120
	I0725 18:43:44.911699   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 115/120
	I0725 18:43:45.913034   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 116/120
	I0725 18:43:46.915061   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 117/120
	I0725 18:43:47.916786   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 118/120
	I0725 18:43:48.919186   57255 main.go:141] libmachine: (no-preload-371663) Waiting for machine to stop 119/120
	I0725 18:43:49.920202   57255 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0725 18:43:49.920276   57255 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0725 18:43:49.922263   57255 out.go:177] 
	W0725 18:43:49.923618   57255 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0725 18:43:49.923634   57255 out.go:239] * 
	* 
	W0725 18:43:49.926167   57255 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:43:49.927479   57255 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-371663 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-371663 -n no-preload-371663
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-371663 -n no-preload-371663: exit status 3 (18.552960464s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:44:08.480686   58413 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.62:22: connect: no route to host
	E0725 18:44:08.480709   58413 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.62:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-371663" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.22s)
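The stderr above shows libmachine polling the domain state once per second for 120 attempts before giving up with GUEST_STOP_TIMEOUT while the VM stays "Running". A hedged sketch of inspecting and forcing off the stuck libvirt domain by hand, assuming `virsh` is available on the host and using the `qemu:///system` URI this job passes to minikube (the domain name matches the profile name shown in the log):

	virsh -c qemu:///system list --all                  # is no-preload-371663 still listed as running?
	virsh -c qemu:///system shutdown no-preload-371663  # request an ACPI shutdown
	virsh -c qemu:///system destroy no-preload-371663   # hard power-off, only as a last resort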

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-600433 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-600433 --alsologtostderr -v=3: exit status 82 (2m0.906616634s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-600433"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:42:17.116456   57509 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:42:17.116832   57509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:42:17.116849   57509 out.go:304] Setting ErrFile to fd 2...
	I0725 18:42:17.116858   57509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:42:17.117209   57509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:42:17.117561   57509 out.go:298] Setting JSON to false
	I0725 18:42:17.117680   57509 mustload.go:65] Loading cluster: default-k8s-diff-port-600433
	I0725 18:42:17.118208   57509 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:42:17.118327   57509 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/config.json ...
	I0725 18:42:17.118600   57509 mustload.go:65] Loading cluster: default-k8s-diff-port-600433
	I0725 18:42:17.118775   57509 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:42:17.118826   57509 stop.go:39] StopHost: default-k8s-diff-port-600433
	I0725 18:42:17.119430   57509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:42:17.119487   57509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:42:17.134824   57509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I0725 18:42:17.135398   57509 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:42:17.136044   57509 main.go:141] libmachine: Using API Version  1
	I0725 18:42:17.136075   57509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:42:17.136405   57509 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:42:17.139167   57509 out.go:177] * Stopping node "default-k8s-diff-port-600433"  ...
	I0725 18:42:17.140564   57509 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0725 18:42:17.140594   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:42:17.140832   57509 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0725 18:42:17.140861   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:42:17.143777   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:42:17.144318   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:42:17.144359   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:42:17.144576   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:42:17.144793   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:42:17.144983   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:42:17.145131   57509 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:42:17.240682   57509 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0725 18:42:17.301356   57509 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0725 18:42:17.364415   57509 main.go:141] libmachine: Stopping "default-k8s-diff-port-600433"...
	I0725 18:42:17.364462   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:42:17.366182   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Stop
	I0725 18:42:17.369921   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 0/120
	I0725 18:42:18.371463   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 1/120
	I0725 18:42:19.372954   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 2/120
	I0725 18:42:20.374128   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 3/120
	I0725 18:42:21.375609   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 4/120
	I0725 18:42:22.377944   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 5/120
	I0725 18:42:23.379470   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 6/120
	I0725 18:42:24.381014   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 7/120
	I0725 18:42:25.771562   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 8/120
	I0725 18:42:26.773314   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 9/120
	I0725 18:42:27.775380   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 10/120
	I0725 18:42:28.776887   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 11/120
	I0725 18:42:29.779201   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 12/120
	I0725 18:42:30.780816   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 13/120
	I0725 18:42:31.782870   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 14/120
	I0725 18:42:32.784556   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 15/120
	I0725 18:42:33.785822   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 16/120
	I0725 18:42:34.787410   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 17/120
	I0725 18:42:35.789055   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 18/120
	I0725 18:42:36.790958   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 19/120
	I0725 18:42:37.793222   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 20/120
	I0725 18:42:38.794653   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 21/120
	I0725 18:42:39.797190   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 22/120
	I0725 18:42:40.799518   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 23/120
	I0725 18:42:41.801016   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 24/120
	I0725 18:42:42.802237   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 25/120
	I0725 18:42:43.803729   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 26/120
	I0725 18:42:44.806053   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 27/120
	I0725 18:42:45.807230   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 28/120
	I0725 18:42:46.808873   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 29/120
	I0725 18:42:47.810991   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 30/120
	I0725 18:42:48.813604   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 31/120
	I0725 18:42:49.815179   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 32/120
	I0725 18:42:50.816791   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 33/120
	I0725 18:42:51.818246   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 34/120
	I0725 18:42:52.820085   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 35/120
	I0725 18:42:53.821483   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 36/120
	I0725 18:42:54.823713   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 37/120
	I0725 18:42:55.825391   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 38/120
	I0725 18:42:56.827145   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 39/120
	I0725 18:42:57.829278   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 40/120
	I0725 18:42:58.831325   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 41/120
	I0725 18:42:59.833007   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 42/120
	I0725 18:43:00.835118   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 43/120
	I0725 18:43:01.836588   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 44/120
	I0725 18:43:02.838513   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 45/120
	I0725 18:43:03.839948   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 46/120
	I0725 18:43:04.841524   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 47/120
	I0725 18:43:05.842916   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 48/120
	I0725 18:43:06.844805   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 49/120
	I0725 18:43:07.847421   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 50/120
	I0725 18:43:08.849680   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 51/120
	I0725 18:43:09.851546   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 52/120
	I0725 18:43:10.853459   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 53/120
	I0725 18:43:11.854858   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 54/120
	I0725 18:43:12.857144   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 55/120
	I0725 18:43:13.858489   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 56/120
	I0725 18:43:14.860150   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 57/120
	I0725 18:43:15.861695   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 58/120
	I0725 18:43:16.863418   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 59/120
	I0725 18:43:17.865240   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 60/120
	I0725 18:43:18.866433   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 61/120
	I0725 18:43:19.868145   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 62/120
	I0725 18:43:20.869435   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 63/120
	I0725 18:43:21.870772   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 64/120
	I0725 18:43:22.872677   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 65/120
	I0725 18:43:23.874103   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 66/120
	I0725 18:43:24.875518   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 67/120
	I0725 18:43:25.877292   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 68/120
	I0725 18:43:26.879454   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 69/120
	I0725 18:43:27.881502   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 70/120
	I0725 18:43:28.883111   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 71/120
	I0725 18:43:29.885016   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 72/120
	I0725 18:43:30.887436   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 73/120
	I0725 18:43:31.888892   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 74/120
	I0725 18:43:32.891012   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 75/120
	I0725 18:43:33.893303   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 76/120
	I0725 18:43:34.894789   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 77/120
	I0725 18:43:35.896415   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 78/120
	I0725 18:43:36.898183   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 79/120
	I0725 18:43:37.900287   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 80/120
	I0725 18:43:38.901768   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 81/120
	I0725 18:43:39.903319   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 82/120
	I0725 18:43:40.905115   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 83/120
	I0725 18:43:41.906430   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 84/120
	I0725 18:43:42.908174   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 85/120
	I0725 18:43:43.909563   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 86/120
	I0725 18:43:44.911063   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 87/120
	I0725 18:43:45.912703   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 88/120
	I0725 18:43:46.915204   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 89/120
	I0725 18:43:47.917631   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 90/120
	I0725 18:43:48.919589   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 91/120
	I0725 18:43:49.921416   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 92/120
	I0725 18:43:50.923214   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 93/120
	I0725 18:43:51.924861   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 94/120
	I0725 18:43:52.926961   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 95/120
	I0725 18:43:53.928455   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 96/120
	I0725 18:43:54.929952   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 97/120
	I0725 18:43:55.931479   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 98/120
	I0725 18:43:56.932849   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 99/120
	I0725 18:43:57.935219   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 100/120
	I0725 18:43:58.936733   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 101/120
	I0725 18:43:59.938194   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 102/120
	I0725 18:44:00.939339   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 103/120
	I0725 18:44:01.940831   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 104/120
	I0725 18:44:02.942597   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 105/120
	I0725 18:44:03.944257   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 106/120
	I0725 18:44:04.945987   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 107/120
	I0725 18:44:05.947942   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 108/120
	I0725 18:44:06.949339   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 109/120
	I0725 18:44:07.951410   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 110/120
	I0725 18:44:08.953593   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 111/120
	I0725 18:44:09.954815   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 112/120
	I0725 18:44:10.956205   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 113/120
	I0725 18:44:11.957684   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 114/120
	I0725 18:44:12.959566   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 115/120
	I0725 18:44:13.961172   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 116/120
	I0725 18:44:14.962870   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 117/120
	I0725 18:44:15.965148   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 118/120
	I0725 18:44:16.966881   57509 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for machine to stop 119/120
	I0725 18:44:17.967415   57509 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0725 18:44:17.967473   57509 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0725 18:44:17.969246   57509 out.go:177] 
	W0725 18:44:17.970478   57509 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0725 18:44:17.970492   57509 out.go:239] * 
	* 
	W0725 18:44:17.973052   57509 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:44:17.974496   57509 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-600433 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433: exit status 3 (18.664362405s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:44:36.640647   59332 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	E0725 18:44:36.640671   59332 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-600433" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-108542 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-108542 create -f testdata/busybox.yaml: exit status 1 (47.260778ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-108542" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-108542 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 6 (235.499885ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:44:03.358860   58576 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-108542" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-108542" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 6 (233.468179ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:44:03.588207   58605 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-108542" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-108542" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)
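The DeployApp step fails only as a consequence of the earlier FirstStart failure: the kubeconfig never received an "old-k8s-version-108542" entry, so every `kubectl --context old-k8s-version-108542` call errors out. A hedged sketch of inspecting and repairing the kubeconfig side of this, as the status warning suggests (it does not fix the cluster that never started):

	kubectl config get-contexts                         # old-k8s-version-108542 is missing here
	minikube update-context -p old-k8s-version-108542   # rewrite the context, per the warning above
	kubectl config use-context old-k8s-version-108542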

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (98.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-108542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-108542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m38.022813189s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-108542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-108542 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-108542 describe deploy/metrics-server -n kube-system: exit status 1 (42.846136ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-108542" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-108542 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 6 (217.093512ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:45:41.875061   60022 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-108542" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-108542" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (98.28s)
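
The failing assertion is a plain substring check: after enabling the addon with the overridden image and registry, the test describes the metrics-server deployment and expects "fake.domain/registry.k8s.io/echoserver:1.4" to appear in the output. Below is a minimal standalone sketch of that check, not the minikube test code itself; the context name, namespace, and image string are copied from the log above, and kubectl is assumed to be on PATH. In this run the check never gets that far, because the context has already vanished from the kubeconfig and kubectl exits 1 first.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Values taken from the log above.
	const context = "old-k8s-version-108542"
	const wantImage = "fake.domain/registry.k8s.io/echoserver:1.4"

	// Same command the test issues at start_stop_delete_test.go:215.
	out, err := exec.Command("kubectl", "--context", context,
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		// The failure mode seen above: the context does not exist,
		// so kubectl fails before any image can be inspected.
		fmt.Printf("describe failed: %v\n%s", err, out)
		return
	}
	if !strings.Contains(string(out), wantImage) {
		fmt.Printf("addon did not load correct image; expected output to contain %q\n", wantImage)
	}
}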

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-371663 -n no-preload-371663
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-371663 -n no-preload-371663: exit status 3 (3.200176865s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:44:11.680622   58931 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.62:22: connect: no route to host
	E0725 18:44:11.680636   58931 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.62:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-371663 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0725 18:44:12.056797   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-371663 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152090244s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.62:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-371663 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-371663 -n no-preload-371663
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-371663 -n no-preload-371663: exit status 3 (3.062626417s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:44:20.896758   59302 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.62:22: connect: no route to host
	E0725 18:44:20.896810   59302 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.62:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-371663" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
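
The contract this test checks is narrow: after "minikube stop", the host state reported by "status --format={{.Host}}" must be exactly "Stopped" before the dashboard addon is re-enabled; here it is "Error" because the VM's SSH endpoint is unreachable. A minimal sketch of that post-stop assertion, assuming the relative binary path and profile name from the log above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "no-preload-371663"

	// Same status probe the test issues at start_stop_delete_test.go:239.
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err != nil {
		// Exit status 3 above corresponds to an unreachable host
		// ("dial tcp ...:22: connect: no route to host").
		fmt.Printf("status returned %v, host state %q\n", err, state)
	}
	if state != "Stopped" {
		fmt.Printf("expected post-stop host status \"Stopped\" but got %q\n", state)
	}
}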

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433: exit status 3 (3.169046669s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:44:39.808631   59529 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	E0725 18:44:39.808652   59529 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-600433 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-600433 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.559702725s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-600433 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433: exit status 3 (2.65498791s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:44:49.024798   59614 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host
	E0725 18:44:49.024817   59614 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.221:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-600433" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
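
Every command in this test fails on the same underlying error, dial tcp 192.168.50.221:22: connect: no route to host, so status reports "Error" and the addon enable cannot even list paused containers over SSH. A small, hypothetical probe (the address is copied from the log above) that separates this network-level failure from anything addon-specific:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VM SSH endpoint taken from the log above.
	addr := "192.168.50.221:22"

	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// "connect: no route to host" here means the guest (or its network)
		// is gone, and every SSH-based minikube command will keep failing
		// the same way until the VM is reachable again.
		fmt.Printf("ssh port unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable; the failure lies elsewhere")
}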

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-646344 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-646344 --alsologtostderr -v=3: exit status 82 (2m0.493419412s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-646344"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:45:19.882564   59929 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:45:19.882838   59929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:45:19.882849   59929 out.go:304] Setting ErrFile to fd 2...
	I0725 18:45:19.882854   59929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:45:19.883046   59929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:45:19.883251   59929 out.go:298] Setting JSON to false
	I0725 18:45:19.883324   59929 mustload.go:65] Loading cluster: embed-certs-646344
	I0725 18:45:19.883633   59929 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:45:19.883701   59929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:45:19.883850   59929 mustload.go:65] Loading cluster: embed-certs-646344
	I0725 18:45:19.883943   59929 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:45:19.883968   59929 stop.go:39] StopHost: embed-certs-646344
	I0725 18:45:19.884368   59929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:45:19.884408   59929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:45:19.899081   59929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43449
	I0725 18:45:19.899543   59929 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:45:19.900035   59929 main.go:141] libmachine: Using API Version  1
	I0725 18:45:19.900060   59929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:45:19.900482   59929 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:45:19.902866   59929 out.go:177] * Stopping node "embed-certs-646344"  ...
	I0725 18:45:19.903985   59929 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0725 18:45:19.904023   59929 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:45:19.904257   59929 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0725 18:45:19.904282   59929 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:45:19.907393   59929 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:45:19.907831   59929 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:44:24 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:45:19.907854   59929 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:45:19.908049   59929 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:45:19.908267   59929 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:45:19.908456   59929 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:45:19.908605   59929 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:45:19.998993   59929 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0725 18:45:20.071003   59929 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0725 18:45:20.128987   59929 main.go:141] libmachine: Stopping "embed-certs-646344"...
	I0725 18:45:20.129016   59929 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:45:20.130509   59929 main.go:141] libmachine: (embed-certs-646344) Calling .Stop
	I0725 18:45:20.134202   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 0/120
	I0725 18:45:21.136010   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 1/120
	I0725 18:45:22.137339   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 2/120
	I0725 18:45:23.138931   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 3/120
	I0725 18:45:24.140134   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 4/120
	I0725 18:45:25.142413   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 5/120
	I0725 18:45:26.143884   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 6/120
	I0725 18:45:27.145580   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 7/120
	I0725 18:45:28.146928   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 8/120
	I0725 18:45:29.148653   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 9/120
	I0725 18:45:30.150476   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 10/120
	I0725 18:45:31.152070   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 11/120
	I0725 18:45:32.153641   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 12/120
	I0725 18:45:33.154918   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 13/120
	I0725 18:45:34.156107   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 14/120
	I0725 18:45:35.158466   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 15/120
	I0725 18:45:36.159667   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 16/120
	I0725 18:45:37.161122   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 17/120
	I0725 18:45:38.162551   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 18/120
	I0725 18:45:39.163981   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 19/120
	I0725 18:45:40.166368   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 20/120
	I0725 18:45:41.167808   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 21/120
	I0725 18:45:42.169065   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 22/120
	I0725 18:45:43.170297   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 23/120
	I0725 18:45:44.171519   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 24/120
	I0725 18:45:45.173382   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 25/120
	I0725 18:45:46.174929   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 26/120
	I0725 18:45:47.176606   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 27/120
	I0725 18:45:48.178176   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 28/120
	I0725 18:45:49.179621   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 29/120
	I0725 18:45:50.182029   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 30/120
	I0725 18:45:51.183363   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 31/120
	I0725 18:45:52.184828   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 32/120
	I0725 18:45:53.186367   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 33/120
	I0725 18:45:54.187741   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 34/120
	I0725 18:45:55.189792   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 35/120
	I0725 18:45:56.191100   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 36/120
	I0725 18:45:57.192577   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 37/120
	I0725 18:45:58.194150   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 38/120
	I0725 18:45:59.195674   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 39/120
	I0725 18:46:00.198043   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 40/120
	I0725 18:46:01.199713   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 41/120
	I0725 18:46:02.201503   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 42/120
	I0725 18:46:03.203007   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 43/120
	I0725 18:46:04.204532   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 44/120
	I0725 18:46:05.206548   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 45/120
	I0725 18:46:06.207948   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 46/120
	I0725 18:46:07.210448   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 47/120
	I0725 18:46:08.212110   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 48/120
	I0725 18:46:09.213545   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 49/120
	I0725 18:46:10.215814   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 50/120
	I0725 18:46:11.217406   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 51/120
	I0725 18:46:12.218914   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 52/120
	I0725 18:46:13.220972   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 53/120
	I0725 18:46:14.222645   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 54/120
	I0725 18:46:15.224874   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 55/120
	I0725 18:46:16.226504   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 56/120
	I0725 18:46:17.227926   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 57/120
	I0725 18:46:18.229817   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 58/120
	I0725 18:46:19.231282   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 59/120
	I0725 18:46:20.233391   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 60/120
	I0725 18:46:21.234831   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 61/120
	I0725 18:46:22.236317   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 62/120
	I0725 18:46:23.237733   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 63/120
	I0725 18:46:24.239130   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 64/120
	I0725 18:46:25.241296   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 65/120
	I0725 18:46:26.242700   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 66/120
	I0725 18:46:27.244160   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 67/120
	I0725 18:46:28.245688   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 68/120
	I0725 18:46:29.247248   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 69/120
	I0725 18:46:30.249606   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 70/120
	I0725 18:46:31.250983   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 71/120
	I0725 18:46:32.252310   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 72/120
	I0725 18:46:33.253571   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 73/120
	I0725 18:46:34.255086   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 74/120
	I0725 18:46:35.257250   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 75/120
	I0725 18:46:36.258642   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 76/120
	I0725 18:46:37.260076   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 77/120
	I0725 18:46:38.261385   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 78/120
	I0725 18:46:39.262744   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 79/120
	I0725 18:46:40.264818   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 80/120
	I0725 18:46:41.266341   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 81/120
	I0725 18:46:42.267790   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 82/120
	I0725 18:46:43.269293   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 83/120
	I0725 18:46:44.270660   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 84/120
	I0725 18:46:45.272709   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 85/120
	I0725 18:46:46.274322   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 86/120
	I0725 18:46:47.275848   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 87/120
	I0725 18:46:48.277287   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 88/120
	I0725 18:46:49.278919   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 89/120
	I0725 18:46:50.281100   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 90/120
	I0725 18:46:51.282891   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 91/120
	I0725 18:46:52.284310   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 92/120
	I0725 18:46:53.286051   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 93/120
	I0725 18:46:54.287621   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 94/120
	I0725 18:46:55.289628   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 95/120
	I0725 18:46:56.290910   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 96/120
	I0725 18:46:57.292444   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 97/120
	I0725 18:46:58.294484   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 98/120
	I0725 18:46:59.296046   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 99/120
	I0725 18:47:00.298274   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 100/120
	I0725 18:47:01.299741   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 101/120
	I0725 18:47:02.301204   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 102/120
	I0725 18:47:03.302723   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 103/120
	I0725 18:47:04.304303   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 104/120
	I0725 18:47:05.306428   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 105/120
	I0725 18:47:06.307725   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 106/120
	I0725 18:47:07.309203   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 107/120
	I0725 18:47:08.310499   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 108/120
	I0725 18:47:09.311870   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 109/120
	I0725 18:47:10.313363   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 110/120
	I0725 18:47:11.314742   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 111/120
	I0725 18:47:12.316160   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 112/120
	I0725 18:47:13.317616   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 113/120
	I0725 18:47:14.319045   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 114/120
	I0725 18:47:15.321147   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 115/120
	I0725 18:47:16.322522   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 116/120
	I0725 18:47:17.323964   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 117/120
	I0725 18:47:18.325388   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 118/120
	I0725 18:47:19.326958   59929 main.go:141] libmachine: (embed-certs-646344) Waiting for machine to stop 119/120
	I0725 18:47:20.327821   59929 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0725 18:47:20.327866   59929 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0725 18:47:20.329680   59929 out.go:177] 
	W0725 18:47:20.330962   59929 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0725 18:47:20.330980   59929 out.go:239] * 
	* 
	W0725 18:47:20.333560   59929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:47:20.334790   59929 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-646344 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-646344 -n embed-certs-646344
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-646344 -n embed-certs-646344: exit status 3 (18.576460044s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:47:38.912691   60519 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.133:22: connect: no route to host
	E0725 18:47:38.912716   60519 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.133:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-646344" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.07s)
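
The stop path visible above is a bounded wait: after asking the driver to stop, minikube polls the machine state once per second for 120 attempts ("Waiting for machine to stop N/120") and gives up with GUEST_STOP_TIMEOUT (exit 82) while the VM is still "Running". A rough sketch of that wait pattern, expressed against the status command rather than the libmachine driver API the log shows; the binary path and profile name are taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const binary = "out/minikube-linux-amd64"
	const profile = "embed-certs-646344"

	// Ask for the stop; in the run above this call itself exits 82 after
	// its internal 120-poll budget is exhausted.
	if err := exec.Command(binary, "stop", "-p", profile).Run(); err != nil {
		fmt.Printf("stop returned: %v\n", err)
	}

	// Mirror the 120 one-second polls seen in the log.
	for i := 0; i < 120; i++ {
		out, _ := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Printf("stopped after %d polls\n", i+1)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("VM still running after 120 polls: the GUEST_STOP_TIMEOUT case")
}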

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (749.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-108542 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0725 18:46:58.590065   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-108542 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m25.657802269s)

                                                
                                                
-- stdout --
	* [old-k8s-version-108542] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-108542" primary control-plane node in "old-k8s-version-108542" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-108542" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:45:46.382450   60176 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:45:46.382565   60176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:45:46.382574   60176 out.go:304] Setting ErrFile to fd 2...
	I0725 18:45:46.382581   60176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:45:46.382762   60176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:45:46.383299   60176 out.go:298] Setting JSON to false
	I0725 18:45:46.384207   60176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5290,"bootTime":1721927856,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:45:46.384262   60176 start.go:139] virtualization: kvm guest
	I0725 18:45:46.386522   60176 out.go:177] * [old-k8s-version-108542] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:45:46.387822   60176 notify.go:220] Checking for updates...
	I0725 18:45:46.387843   60176 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:45:46.389215   60176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:45:46.390500   60176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:45:46.391763   60176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:45:46.392918   60176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:45:46.394125   60176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:45:46.395672   60176 config.go:182] Loaded profile config "old-k8s-version-108542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:45:46.396096   60176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:45:46.396129   60176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:45:46.410659   60176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40541
	I0725 18:45:46.411065   60176 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:45:46.411641   60176 main.go:141] libmachine: Using API Version  1
	I0725 18:45:46.411673   60176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:45:46.411984   60176 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:45:46.412171   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:45:46.413958   60176 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0725 18:45:46.415125   60176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:45:46.415425   60176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:45:46.415462   60176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:45:46.429665   60176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0725 18:45:46.430071   60176 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:45:46.430551   60176 main.go:141] libmachine: Using API Version  1
	I0725 18:45:46.430573   60176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:45:46.430818   60176 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:45:46.430971   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:45:46.464857   60176 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:45:46.466155   60176 start.go:297] selected driver: kvm2
	I0725 18:45:46.466171   60176 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:45:46.466298   60176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:45:46.467293   60176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:45:46.467374   60176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:45:46.481695   60176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:45:46.482058   60176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:45:46.482119   60176 cni.go:84] Creating CNI manager for ""
	I0725 18:45:46.482133   60176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:45:46.482175   60176 start.go:340] cluster config:
	{Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:45:46.482282   60176 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:45:46.483909   60176 out.go:177] * Starting "old-k8s-version-108542" primary control-plane node in "old-k8s-version-108542" cluster
	I0725 18:45:46.484969   60176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:45:46.484999   60176 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0725 18:45:46.485007   60176 cache.go:56] Caching tarball of preloaded images
	I0725 18:45:46.485077   60176 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:45:46.485087   60176 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0725 18:45:46.485175   60176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json ...
	I0725 18:45:46.485340   60176 start.go:360] acquireMachinesLock for old-k8s-version-108542: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:49:36.036943   60176 start.go:364] duration metric: took 3m49.551567331s to acquireMachinesLock for "old-k8s-version-108542"
	I0725 18:49:36.037007   60176 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:36.037018   60176 fix.go:54] fixHost starting: 
	I0725 18:49:36.037477   60176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:36.037517   60176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:36.055190   60176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0725 18:49:36.055631   60176 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:36.056086   60176 main.go:141] libmachine: Using API Version  1
	I0725 18:49:36.056105   60176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:36.056466   60176 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:36.056685   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:36.056862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetState
	I0725 18:49:36.058311   60176 fix.go:112] recreateIfNeeded on old-k8s-version-108542: state=Stopped err=<nil>
	I0725 18:49:36.058348   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	W0725 18:49:36.058530   60176 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:36.060822   60176 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-108542" ...
	I0725 18:49:36.062077   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .Start
	I0725 18:49:36.062241   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring networks are active...
	I0725 18:49:36.062926   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network default is active
	I0725 18:49:36.063329   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network mk-old-k8s-version-108542 is active
	I0725 18:49:36.063698   60176 main.go:141] libmachine: (old-k8s-version-108542) Getting domain xml...
	I0725 18:49:36.064367   60176 main.go:141] libmachine: (old-k8s-version-108542) Creating domain...
	I0725 18:49:37.310225   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting to get IP...
	I0725 18:49:37.311059   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.311480   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.311557   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.311444   61209 retry.go:31] will retry after 249.654633ms: waiting for machine to come up
	I0725 18:49:37.563210   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.563727   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.563774   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.563696   61209 retry.go:31] will retry after 360.974896ms: waiting for machine to come up
	I0725 18:49:37.926464   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.927033   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.927104   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.926935   61209 retry.go:31] will retry after 392.213819ms: waiting for machine to come up
	I0725 18:49:38.320659   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.321153   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.321182   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.321107   61209 retry.go:31] will retry after 443.035852ms: waiting for machine to come up
	I0725 18:49:38.765569   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.765972   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.765996   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.765944   61209 retry.go:31] will retry after 691.876502ms: waiting for machine to come up
	I0725 18:49:39.459944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:39.460308   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:39.460354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:39.460259   61209 retry.go:31] will retry after 870.093433ms: waiting for machine to come up
	I0725 18:49:40.331944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:40.332382   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:40.332411   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:40.332301   61209 retry.go:31] will retry after 875.3931ms: waiting for machine to come up
	I0725 18:49:41.209789   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:41.210251   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:41.210275   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:41.210196   61209 retry.go:31] will retry after 1.355093494s: waiting for machine to come up
	I0725 18:49:42.566588   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:42.567061   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:42.567089   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:42.567010   61209 retry.go:31] will retry after 1.670701174s: waiting for machine to come up
	I0725 18:49:44.238961   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:44.239359   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:44.239377   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:44.239329   61209 retry.go:31] will retry after 2.028917586s: waiting for machine to come up
	I0725 18:49:46.270213   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:46.270674   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:46.270695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:46.270630   61209 retry.go:31] will retry after 2.760614678s: waiting for machine to come up
	I0725 18:49:49.034670   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:49.035109   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:49.035136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:49.035073   61209 retry.go:31] will retry after 2.928049351s: waiting for machine to come up
	I0725 18:49:51.964707   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:51.965228   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:51.965263   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:51.965151   61209 retry.go:31] will retry after 3.053047755s: waiting for machine to come up
	I0725 18:49:55.022350   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022815   60176 main.go:141] libmachine: (old-k8s-version-108542) Found IP for machine: 192.168.39.29
	I0725 18:49:55.022846   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has current primary IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022858   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserving static IP address...
	I0725 18:49:55.023277   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserved static IP address: 192.168.39.29
	I0725 18:49:55.023333   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.023342   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting for SSH to be available...
	I0725 18:49:55.023394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | skip adding static IP to network mk-old-k8s-version-108542 - found existing host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"}
	I0725 18:49:55.023425   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Getting to WaitForSSH function...
	I0725 18:49:55.025250   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025544   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.025574   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025668   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH client type: external
	I0725 18:49:55.025699   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa (-rw-------)
	I0725 18:49:55.025731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:55.025753   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | About to run SSH command:
	I0725 18:49:55.025770   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | exit 0
	I0725 18:49:55.152309   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | SSH cmd err, output: <nil>: 
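The probe above is libmachine's SSH readiness check: it shells out to the external ssh client with the options logged a few lines earlier and treats a clean `exit 0` as "machine is up". A minimal manual equivalent, reusing the key path and address from this run, would be:

	# same readiness probe by hand: success means the guest accepts SSH
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o ConnectTimeout=10 -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa \
	    docker@192.168.39.29 'exit 0' && echo reachable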
	I0725 18:49:55.152720   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:49:55.153338   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.155460   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.155755   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155969   60176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json ...
	I0725 18:49:55.156128   60176 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:55.156143   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:55.156307   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.158465   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.158795   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.158827   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.159012   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.159174   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159366   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159512   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.159688   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.159902   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.159914   60176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:55.268422   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:55.268446   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268707   60176 buildroot.go:166] provisioning hostname "old-k8s-version-108542"
	I0725 18:49:55.268732   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268931   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.271599   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.271913   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.271949   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.272120   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.272285   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272490   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272657   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.272830   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.273003   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.273017   60176 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-108542 && echo "old-k8s-version-108542" | sudo tee /etc/hostname
	I0725 18:49:55.398261   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-108542
	
	I0725 18:49:55.398291   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.401090   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.401517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401669   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.401870   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402026   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402182   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.402380   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.402621   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.402648   60176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-108542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-108542/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-108542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:55.523079   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
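The two SSH commands just executed set the guest hostname and keep /etc/hosts consistent with it. Condensed into a standalone sketch (same logic as the script above, only flattened):

	# set the hostname and add a 127.0.1.1 entry if none exists yet
	sudo hostname old-k8s-version-108542 && echo "old-k8s-version-108542" | sudo tee /etc/hostname
	grep -q 'old-k8s-version-108542' /etc/hosts || echo '127.0.1.1 old-k8s-version-108542' | sudo tee -a /etc/hosts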
	I0725 18:49:55.523115   60176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:55.523147   60176 buildroot.go:174] setting up certificates
	I0725 18:49:55.523156   60176 provision.go:84] configureAuth start
	I0725 18:49:55.523165   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.523486   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.526235   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526644   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.526675   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526875   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.529466   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.529836   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.529865   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.530004   60176 provision.go:143] copyHostCerts
	I0725 18:49:55.530058   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:55.530068   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:55.530113   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:55.530198   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:55.530205   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:55.530225   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:55.530386   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:55.530401   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:55.530426   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:55.530494   60176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-108542 san=[127.0.0.1 192.168.39.29 localhost minikube old-k8s-version-108542]
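The server certificate generated here carries the SANs listed in the log (127.0.0.1, 192.168.39.29, localhost, minikube, old-k8s-version-108542). A quick way to confirm that after the fact, not part of the test run itself, is to inspect the PEM with openssl:

	# hypothetical check: list the SANs embedded in the generated server cert
	openssl x509 -in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'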
	I0725 18:49:55.740503   60176 provision.go:177] copyRemoteCerts
	I0725 18:49:55.740561   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:55.740585   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.743257   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743582   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.743615   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743798   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.743997   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.744160   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.744312   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:55.825771   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:55.847516   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0725 18:49:55.869368   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:55.893223   60176 provision.go:87] duration metric: took 370.054854ms to configureAuth
	I0725 18:49:55.893255   60176 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:55.893425   60176 config.go:182] Loaded profile config "old-k8s-version-108542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:49:55.893500   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.896394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896703   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.896758   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896962   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.897161   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897431   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897631   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.897855   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.898023   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.898036   60176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:56.181257   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:56.181300   60176 machine.go:97] duration metric: took 1.025159397s to provisionDockerMachine
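Provisioning ends by dropping a CRI-O environment file and restarting the service so the --insecure-registry option takes effect. The equivalent commands, lifted from the SSH invocation above:

	# write minikube's CRI-O options drop-in and restart the runtime
	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio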
	I0725 18:49:56.181315   60176 start.go:293] postStartSetup for "old-k8s-version-108542" (driver="kvm2")
	I0725 18:49:56.181334   60176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:56.181353   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.181666   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:56.181688   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.184354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.184718   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184851   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.185034   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.185185   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.185308   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.266683   60176 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:56.270387   60176 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:56.270407   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:56.270474   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:56.270559   60176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:56.270668   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:56.279276   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:56.302444   60176 start.go:296] duration metric: took 121.115308ms for postStartSetup
	I0725 18:49:56.302497   60176 fix.go:56] duration metric: took 20.26546429s for fixHost
	I0725 18:49:56.302517   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.305136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.305517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305706   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.305922   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306074   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306193   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.306317   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:56.306502   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:56.306514   60176 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 18:49:56.412575   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933396.389223979
	
	I0725 18:49:56.412602   60176 fix.go:216] guest clock: 1721933396.389223979
	I0725 18:49:56.412612   60176 fix.go:229] Guest: 2024-07-25 18:49:56.389223979 +0000 UTC Remote: 2024-07-25 18:49:56.302501019 +0000 UTC m=+249.953644815 (delta=86.72296ms)
	I0725 18:49:56.412634   60176 fix.go:200] guest clock delta is within tolerance: 86.72296ms
	I0725 18:49:56.412639   60176 start.go:83] releasing machines lock for "old-k8s-version-108542", held for 20.375658703s
	I0725 18:49:56.412668   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.412935   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:56.415814   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416191   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.416219   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416398   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.416862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417065   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417160   60176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:56.417201   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.417309   60176 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:56.417329   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.420122   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420371   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420526   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420550   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420682   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.420816   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420846   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.420850   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420984   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.421058   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421126   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.421198   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.421272   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421418   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.529391   60176 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:56.535114   60176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:56.674979   60176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:56.681160   60176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:56.681260   60176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:56.696192   60176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:56.696215   60176 start.go:495] detecting cgroup driver to use...
	I0725 18:49:56.696309   60176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:56.713088   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:56.727033   60176 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:56.727095   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:56.742008   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:56.756146   60176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:56.884075   60176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:57.051613   60176 docker.go:233] disabling docker service ...
	I0725 18:49:57.051742   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:57.068011   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:57.082300   60176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:57.208673   60176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:57.372393   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:57.397281   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:57.418913   60176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0725 18:49:57.418978   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.429833   60176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:57.429909   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.440717   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.451076   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.465052   60176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:57.476592   60176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:57.487164   60176 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:57.487225   60176 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:57.501748   60176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:57.514743   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:57.658648   60176 ssh_runner.go:195] Run: sudo systemctl restart crio
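The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf for this Kubernetes version and prepare the kernel networking bits before restarting CRI-O. Grouped into one sketch (same commands as in the log):

	# pin the pause image and cgroup manager, enable bridge netfilter and IPv4 forwarding, restart CRI-O
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio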
	I0725 18:49:57.811455   60176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:57.811534   60176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:57.816193   60176 start.go:563] Will wait 60s for crictl version
	I0725 18:49:57.816267   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:49:57.819557   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:57.854511   60176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:57.854594   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.881542   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.910664   60176 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0725 18:49:57.912004   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:57.914958   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915429   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:57.915462   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915628   60176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:57.919685   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
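The one-liner above refreshes /etc/hosts in two steps: it drops any stale host.minikube.internal entry and appends the current gateway address. The same pattern, spelled out (/tmp/hosts.new is just an illustrative temp path; the log uses /tmp/h.$$):

	# refresh the host.minikube.internal entry without leaving duplicates behind
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts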
	I0725 18:49:57.932248   60176 kubeadm.go:883] updating cluster {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:57.932392   60176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:49:57.932440   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:57.982230   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:49:57.982305   60176 ssh_runner.go:195] Run: which lz4
	I0725 18:49:57.986657   60176 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 18:49:57.990932   60176 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:57.990956   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0725 18:49:59.415735   60176 crio.go:462] duration metric: took 1.429111358s to copy over tarball
	I0725 18:49:59.415800   60176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:50:02.370917   60176 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.955090558s)
	I0725 18:50:02.370951   60176 crio.go:469] duration metric: took 2.955186203s to extract the tarball
	I0725 18:50:02.370960   60176 ssh_runner.go:146] rm: /preloaded.tar.lz4
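The preload tarball is copied into the guest, unpacked into /var so CRI-O's image store is pre-populated, and then deleted. By hand that is:

	# unpack the preloaded images into /var and clean up the archive
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images --output json   # what the runtime reports afterwards (here it still lacks the v1.20.0 images)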
	I0725 18:50:02.411686   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:02.448550   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:50:02.448575   60176 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:02.448653   60176 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0725 18:50:02.448657   60176 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.448722   60176 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.448739   60176 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.448661   60176 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450195   60176 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.450213   60176 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0725 18:50:02.450237   60176 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.450335   60176 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.450375   60176 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.450489   60176 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.711747   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.718711   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0725 18:50:02.721465   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.721473   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.728447   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.745432   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.745791   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.776147   60176 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0725 18:50:02.776200   60176 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.776245   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.857374   60176 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0725 18:50:02.857423   60176 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0725 18:50:02.857486   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.876850   60176 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0725 18:50:02.876897   60176 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.876922   60176 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0725 18:50:02.876963   60176 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.876974   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877024   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877044   60176 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0725 18:50:02.877071   60176 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.877107   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.896960   60176 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0725 18:50:02.897008   60176 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.897011   60176 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0725 18:50:02.897042   60176 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.897053   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897061   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.897083   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897120   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0725 18:50:02.897148   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.897196   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.897248   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.992459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.992499   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0725 18:50:03.005360   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0725 18:50:03.005381   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0725 18:50:03.005435   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0725 18:50:03.005459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:03.005503   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0725 18:50:03.042218   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0725 18:50:03.054960   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0725 18:50:03.279419   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:03.416646   60176 cache_images.go:92] duration metric: took 968.05409ms to LoadCachedImages
	W0725 18:50:03.416750   60176 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0725 18:50:03.416767   60176 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.20.0 crio true true} ...
	I0725 18:50:03.416896   60176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-108542 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:03.416979   60176 ssh_runner.go:195] Run: crio config
	I0725 18:50:03.470581   60176 cni.go:84] Creating CNI manager for ""
	I0725 18:50:03.470611   60176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:03.470627   60176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:03.470647   60176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-108542 NodeName:old-k8s-version-108542 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0725 18:50:03.470772   60176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-108542"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:03.470828   60176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0725 18:50:03.481757   60176 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:03.481839   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:03.494342   60176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0725 18:50:03.511779   60176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:03.532137   60176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
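The rendered kubeadm config shown above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. As a rough sketch of how such a file is ultimately consumed (the exact kubeadm invocation, including any preflight-ignore flags minikube adds, is not part of this excerpt, and the kubeadm binary path is assumed from the kubelet path logged above):

	# illustrative only: bootstrap the control plane from the generated config
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml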
	I0725 18:50:03.551049   60176 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:03.554903   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:03.566677   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:03.687540   60176 ssh_runner.go:195] Run: sudo systemctl start kubelet
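With the generated unit file and the 10-kubeadm.conf drop-in in place, systemd is reloaded and the kubelet started. A quick follow-up check (not part of this log) would be:

	# confirm the kubelet service came up with the generated ExecStart flags
	systemctl status kubelet --no-pager
	sudo journalctl -u kubelet --no-pager -n 20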
	I0725 18:50:03.710900   60176 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542 for IP: 192.168.39.29
	I0725 18:50:03.710922   60176 certs.go:194] generating shared ca certs ...
	I0725 18:50:03.710937   60176 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:03.711088   60176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:03.711126   60176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:03.711132   60176 certs.go:256] generating profile certs ...
	I0725 18:50:03.711231   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key
	I0725 18:50:03.711282   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0
	I0725 18:50:03.711315   60176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key
	I0725 18:50:03.711420   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:03.711449   60176 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:03.711458   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:03.711479   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:03.711499   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:03.711520   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:03.711562   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:03.712203   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:03.762265   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:03.804226   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:03.840167   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:03.868353   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 18:50:03.893425   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:03.917266   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:03.946205   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:03.974128   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:04.001887   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:04.026495   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:04.049083   60176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:04.065407   60176 ssh_runner.go:195] Run: openssl version
	I0725 18:50:04.071064   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:04.082038   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086705   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086760   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.092445   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:04.103129   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:04.113789   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118390   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118467   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.123884   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:04.134230   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:04.144372   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148559   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148620   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.153744   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:04.163757   60176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:04.167873   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:04.173706   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:04.179385   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:04.185222   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:04.190716   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:04.196938   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 18:50:04.202361   60176 kubeadm.go:392] StartCluster: {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:04.202447   60176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:04.202505   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.243628   60176 cri.go:89] found id: ""
	I0725 18:50:04.243703   60176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:04.253768   60176 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:04.253788   60176 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:04.253841   60176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:04.264596   60176 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:04.265990   60176 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-108542" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:04.266997   60176 kubeconfig.go:62] /home/jenkins/minikube-integration/19326-5877/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-108542" cluster setting kubeconfig missing "old-k8s-version-108542" context setting]
	I0725 18:50:04.268480   60176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:04.388386   60176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:04.398469   60176 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.29
	I0725 18:50:04.398517   60176 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:04.398530   60176 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:04.398590   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.434823   60176 cri.go:89] found id: ""
	I0725 18:50:04.434906   60176 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:04.453378   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:04.463520   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:04.463559   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:04.463611   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:04.473075   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:04.473138   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:04.482881   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:04.494801   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:04.494875   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:04.507011   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.516433   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:04.516505   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.528076   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:04.537505   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:04.537572   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:04.547429   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:04.556717   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:04.754947   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.606839   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.850150   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.957944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:06.039317   60176 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:06.039436   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:06.539802   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.539809   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.539594   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.040315   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.539830   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.039578   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.539828   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:11.039598   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:11.540367   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.040178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.039929   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.540517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.040281   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.540287   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.039549   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.540265   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:16.039520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:16.539725   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.539756   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.040221   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.539666   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.040416   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.540379   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.040257   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.540153   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:21.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:21.540165   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.539544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.040164   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.539691   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.040229   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.540225   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.039517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.540158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.540560   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.039938   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.539928   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.039509   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.540137   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.040535   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.539745   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.039557   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.540254   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:31.040189   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:31.540443   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.039950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.539852   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.039523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.539582   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.040355   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.539951   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.040161   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.540076   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:36.040195   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:36.540043   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.039832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.540456   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.039553   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.539530   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.040246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.539520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.039506   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.539963   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:41.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:41.539822   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.039895   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.539947   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.040433   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.540098   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.040089   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.540140   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.040238   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.539529   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:46.040232   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:46.539657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.039681   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.540207   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.040234   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.539937   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.039544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.539646   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.039759   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.540439   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.040293   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.540537   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.040242   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.539493   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.039657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.540427   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.039461   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.539605   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.040573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.539704   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.039573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.539523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.040168   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.540038   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.040304   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.540248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.039609   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.540022   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.039843   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.539808   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.039515   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.540034   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.040266   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.539829   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.039496   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.540260   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.040236   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.540450   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:06.039595   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:06.039675   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:06.077020   60176 cri.go:89] found id: ""
	I0725 18:51:06.077048   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.077058   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:06.077066   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:06.077125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:06.109173   60176 cri.go:89] found id: ""
	I0725 18:51:06.109203   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.109213   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:06.109220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:06.109283   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:06.141838   60176 cri.go:89] found id: ""
	I0725 18:51:06.141875   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.141882   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:06.141888   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:06.141947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:06.175036   60176 cri.go:89] found id: ""
	I0725 18:51:06.175063   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.175074   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:06.175081   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:06.175144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:06.207497   60176 cri.go:89] found id: ""
	I0725 18:51:06.207519   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.207527   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:06.207532   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:06.207589   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:06.241910   60176 cri.go:89] found id: ""
	I0725 18:51:06.241936   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.241943   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:06.241948   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:06.242001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:06.273353   60176 cri.go:89] found id: ""
	I0725 18:51:06.273381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.273391   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:06.273398   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:06.273472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:06.307358   60176 cri.go:89] found id: ""
	I0725 18:51:06.307381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.307391   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:06.307401   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:06.307415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:06.360759   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:06.360792   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:06.373930   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:06.373956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:06.488979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:06.489003   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:06.489018   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:06.553782   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:06.553813   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.093966   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:09.106176   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:09.106242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:09.143847   60176 cri.go:89] found id: ""
	I0725 18:51:09.143872   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.143880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:09.143885   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:09.143936   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:09.178605   60176 cri.go:89] found id: ""
	I0725 18:51:09.178636   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.178647   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:09.178654   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:09.178715   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:09.211866   60176 cri.go:89] found id: ""
	I0725 18:51:09.211892   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.211901   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:09.211906   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:09.211957   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:09.244343   60176 cri.go:89] found id: ""
	I0725 18:51:09.244371   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.244381   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:09.244389   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:09.244445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:09.279416   60176 cri.go:89] found id: ""
	I0725 18:51:09.279440   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.279448   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:09.279463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:09.279530   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:09.317039   60176 cri.go:89] found id: ""
	I0725 18:51:09.317064   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.317071   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:09.317077   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:09.317123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:09.347997   60176 cri.go:89] found id: ""
	I0725 18:51:09.348031   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.348042   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:09.348049   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:09.348107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:09.380485   60176 cri.go:89] found id: ""
	I0725 18:51:09.380514   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.380524   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:09.380535   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:09.380560   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:09.451881   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:09.451920   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.488427   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:09.488454   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:09.538096   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:09.538142   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:09.551001   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:09.551026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:09.628882   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:12.129787   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:12.141852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:12.141915   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:12.178227   60176 cri.go:89] found id: ""
	I0725 18:51:12.178257   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.178266   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:12.178271   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:12.178329   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:12.209154   60176 cri.go:89] found id: ""
	I0725 18:51:12.209179   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.209186   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:12.209190   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:12.209238   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:12.244091   60176 cri.go:89] found id: ""
	I0725 18:51:12.244119   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.244127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:12.244134   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:12.244183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:12.277865   60176 cri.go:89] found id: ""
	I0725 18:51:12.277894   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.277906   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:12.277911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:12.277958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:12.311172   60176 cri.go:89] found id: ""
	I0725 18:51:12.311196   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.311207   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:12.311214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:12.311274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:12.341668   60176 cri.go:89] found id: ""
	I0725 18:51:12.341696   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.341706   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:12.341714   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:12.341775   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:12.375342   60176 cri.go:89] found id: ""
	I0725 18:51:12.375372   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.375383   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:12.375390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:12.375449   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:12.409783   60176 cri.go:89] found id: ""
	I0725 18:51:12.409807   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.409814   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:12.409822   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:12.409834   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:12.484503   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:12.484546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:12.522948   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:12.522974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:12.573975   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:12.574008   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:12.587600   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:12.587628   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:12.660403   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.161385   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:15.174773   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:15.174845   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:15.206845   60176 cri.go:89] found id: ""
	I0725 18:51:15.206871   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.206882   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:15.206889   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:15.206949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:15.239306   60176 cri.go:89] found id: ""
	I0725 18:51:15.239335   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.239344   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:15.239350   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:15.239437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:15.276152   60176 cri.go:89] found id: ""
	I0725 18:51:15.276187   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.276198   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:15.276207   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:15.276265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:15.309616   60176 cri.go:89] found id: ""
	I0725 18:51:15.309647   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.309659   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:15.309667   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:15.309729   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:15.343938   60176 cri.go:89] found id: ""
	I0725 18:51:15.343967   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.343978   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:15.343985   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:15.344051   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:15.380268   60176 cri.go:89] found id: ""
	I0725 18:51:15.380298   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.380310   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:15.380317   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:15.380448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:15.421291   60176 cri.go:89] found id: ""
	I0725 18:51:15.421337   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.421347   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:15.421353   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:15.421408   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:15.466805   60176 cri.go:89] found id: ""
	I0725 18:51:15.466826   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.466835   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:15.466845   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:15.466859   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:15.513464   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:15.513489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:15.567742   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:15.567775   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:15.583613   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:15.583647   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:15.653613   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.653637   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:15.653651   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:18.230294   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:18.244269   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:18.244352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:18.282255   60176 cri.go:89] found id: ""
	I0725 18:51:18.282281   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.282291   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:18.282298   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:18.282377   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:18.316217   60176 cri.go:89] found id: ""
	I0725 18:51:18.316250   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.316261   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:18.316269   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:18.316349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:18.347730   60176 cri.go:89] found id: ""
	I0725 18:51:18.347756   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.347764   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:18.347769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:18.347815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:18.379968   60176 cri.go:89] found id: ""
	I0725 18:51:18.379991   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.379999   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:18.380006   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:18.380062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:18.415621   60176 cri.go:89] found id: ""
	I0725 18:51:18.415644   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.415652   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:18.415657   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:18.415704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:18.452073   60176 cri.go:89] found id: ""
	I0725 18:51:18.452101   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.452109   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:18.452115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:18.452171   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:18.483337   60176 cri.go:89] found id: ""
	I0725 18:51:18.483382   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.483390   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:18.483396   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:18.483440   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:18.516941   60176 cri.go:89] found id: ""
	I0725 18:51:18.516966   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.516976   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:18.516987   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:18.517002   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:18.587295   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:18.587321   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:18.587338   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:18.666539   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:18.666569   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:18.707434   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:18.707465   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:18.761893   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:18.761932   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.276464   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:21.291939   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:21.292011   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:21.326022   60176 cri.go:89] found id: ""
	I0725 18:51:21.326055   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.326066   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:21.326073   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:21.326130   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:21.366081   60176 cri.go:89] found id: ""
	I0725 18:51:21.366104   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.366112   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:21.366117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:21.366165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:21.403086   60176 cri.go:89] found id: ""
	I0725 18:51:21.403111   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.403122   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:21.403128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:21.403208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:21.439268   60176 cri.go:89] found id: ""
	I0725 18:51:21.439297   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.439305   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:21.439310   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:21.439359   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:21.483601   60176 cri.go:89] found id: ""
	I0725 18:51:21.483631   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.483639   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:21.483645   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:21.483704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:21.519061   60176 cri.go:89] found id: ""
	I0725 18:51:21.519093   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.519103   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:21.519111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:21.519186   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:21.548781   60176 cri.go:89] found id: ""
	I0725 18:51:21.548806   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.548814   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:21.548820   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:21.548881   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:21.581940   60176 cri.go:89] found id: ""
	I0725 18:51:21.581963   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.581970   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:21.581979   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:21.581991   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:21.634758   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:21.634795   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.648358   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:21.648382   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:21.716109   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:21.716133   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:21.716149   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:21.794003   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:21.794030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.331731   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:24.344646   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:24.344709   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:24.385373   60176 cri.go:89] found id: ""
	I0725 18:51:24.385395   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.385403   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:24.385408   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:24.385453   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:24.417015   60176 cri.go:89] found id: ""
	I0725 18:51:24.417044   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.417054   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:24.417061   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:24.417125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:24.457093   60176 cri.go:89] found id: ""
	I0725 18:51:24.457118   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.457127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:24.457132   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:24.457197   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:24.489155   60176 cri.go:89] found id: ""
	I0725 18:51:24.489183   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.489192   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:24.489197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:24.489253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:24.521907   60176 cri.go:89] found id: ""
	I0725 18:51:24.521934   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.521943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:24.521949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:24.522006   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:24.553652   60176 cri.go:89] found id: ""
	I0725 18:51:24.553688   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.553698   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:24.553705   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:24.553765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:24.587957   60176 cri.go:89] found id: ""
	I0725 18:51:24.587989   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.587997   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:24.588002   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:24.588060   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:24.623564   60176 cri.go:89] found id: ""
	I0725 18:51:24.623591   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.623600   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:24.623609   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:24.623624   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:24.676176   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:24.676208   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:24.689179   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:24.689202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:24.761900   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:24.761928   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:24.761943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:24.845021   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:24.845058   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:27.384900   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:27.398947   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:27.399009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:27.431604   60176 cri.go:89] found id: ""
	I0725 18:51:27.431632   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.431641   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:27.431648   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:27.431698   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:27.464167   60176 cri.go:89] found id: ""
	I0725 18:51:27.464201   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.464212   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:27.464220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:27.464279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:27.497951   60176 cri.go:89] found id: ""
	I0725 18:51:27.497985   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.497996   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:27.498003   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:27.498056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:27.535363   60176 cri.go:89] found id: ""
	I0725 18:51:27.535389   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.535401   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:27.535406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:27.535452   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:27.565506   60176 cri.go:89] found id: ""
	I0725 18:51:27.565531   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.565541   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:27.565548   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:27.565615   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:27.595635   60176 cri.go:89] found id: ""
	I0725 18:51:27.595662   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.595672   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:27.595678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:27.595734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:27.627482   60176 cri.go:89] found id: ""
	I0725 18:51:27.627511   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.627522   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:27.627529   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:27.627596   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:27.663481   60176 cri.go:89] found id: ""
	I0725 18:51:27.663507   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.663517   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:27.663530   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:27.663544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:27.746487   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:27.746519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:27.783100   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:27.783128   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:27.834865   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:27.834895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:27.849097   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:27.849124   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:27.914406   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:30.415417   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:30.429086   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:30.429151   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:30.470514   60176 cri.go:89] found id: ""
	I0725 18:51:30.470538   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.470561   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:30.470569   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:30.470629   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:30.503903   60176 cri.go:89] found id: ""
	I0725 18:51:30.503931   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.503942   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:30.503950   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:30.504014   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:30.535562   60176 cri.go:89] found id: ""
	I0725 18:51:30.535589   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.535597   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:30.535602   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:30.535667   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:30.567435   60176 cri.go:89] found id: ""
	I0725 18:51:30.567461   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.567471   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:30.567478   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:30.567538   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:30.604430   60176 cri.go:89] found id: ""
	I0725 18:51:30.604455   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.604465   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:30.604471   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:30.604540   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:30.644788   60176 cri.go:89] found id: ""
	I0725 18:51:30.644814   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.644834   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:30.644843   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:30.644908   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:30.678530   60176 cri.go:89] found id: ""
	I0725 18:51:30.678572   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.678585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:30.678593   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:30.678668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:30.713090   60176 cri.go:89] found id: ""
	I0725 18:51:30.713112   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.713120   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:30.713128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:30.713141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:30.792075   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:30.792106   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:30.829452   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:30.829482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:30.879437   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:30.879474   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:30.892281   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:30.892308   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:30.959814   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:33.460838   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:33.474242   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:33.474351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:33.508097   60176 cri.go:89] found id: ""
	I0725 18:51:33.508125   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.508134   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:33.508140   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:33.508188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:33.542576   60176 cri.go:89] found id: ""
	I0725 18:51:33.542605   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.542612   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:33.542618   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:33.542666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:33.576079   60176 cri.go:89] found id: ""
	I0725 18:51:33.576106   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.576115   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:33.576122   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:33.576187   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:33.610618   60176 cri.go:89] found id: ""
	I0725 18:51:33.610639   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.610646   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:33.610651   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:33.610702   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:33.641925   60176 cri.go:89] found id: ""
	I0725 18:51:33.641960   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.641972   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:33.641979   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:33.642047   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:33.675283   60176 cri.go:89] found id: ""
	I0725 18:51:33.675318   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.675333   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:33.675346   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:33.675412   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:33.707991   60176 cri.go:89] found id: ""
	I0725 18:51:33.708017   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.708026   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:33.708034   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:33.708094   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:33.744209   60176 cri.go:89] found id: ""
	I0725 18:51:33.744237   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.744247   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:33.744258   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:33.744273   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:33.794620   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:33.794648   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:33.807089   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:33.807118   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:33.870937   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:33.870960   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:33.870976   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:33.953214   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:33.953249   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:36.491625   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:36.504949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:36.505023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:36.538077   60176 cri.go:89] found id: ""
	I0725 18:51:36.538101   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.538109   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:36.538114   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:36.538165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:36.570239   60176 cri.go:89] found id: ""
	I0725 18:51:36.570262   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.570269   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:36.570275   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:36.570325   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:36.603096   60176 cri.go:89] found id: ""
	I0725 18:51:36.603124   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.603133   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:36.603139   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:36.603196   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:36.637479   60176 cri.go:89] found id: ""
	I0725 18:51:36.637506   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.637518   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:36.637525   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:36.637580   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:36.670834   60176 cri.go:89] found id: ""
	I0725 18:51:36.670859   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.670868   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:36.670875   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:36.670942   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:36.707825   60176 cri.go:89] found id: ""
	I0725 18:51:36.707851   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.707859   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:36.707866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:36.707924   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:36.748014   60176 cri.go:89] found id: ""
	I0725 18:51:36.748046   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.748058   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:36.748067   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:36.748132   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:36.779939   60176 cri.go:89] found id: ""
	I0725 18:51:36.779967   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.779975   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:36.779982   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:36.779994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:36.836710   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:36.836741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:36.849791   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:36.849830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:36.919247   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:36.919270   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:36.919286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:36.994368   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:36.994405   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:39.530980   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:39.543355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:39.543417   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:39.576897   60176 cri.go:89] found id: ""
	I0725 18:51:39.576925   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.576935   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:39.576941   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:39.576996   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:39.610545   60176 cri.go:89] found id: ""
	I0725 18:51:39.610576   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.610584   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:39.610596   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:39.610651   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:39.642072   60176 cri.go:89] found id: ""
	I0725 18:51:39.642097   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.642107   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:39.642114   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:39.642173   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:39.673841   60176 cri.go:89] found id: ""
	I0725 18:51:39.673866   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.673874   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:39.673880   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:39.673933   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:39.706537   60176 cri.go:89] found id: ""
	I0725 18:51:39.706562   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.706571   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:39.706584   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:39.706635   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:39.744897   60176 cri.go:89] found id: ""
	I0725 18:51:39.744924   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.744935   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:39.744942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:39.745004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:39.780466   60176 cri.go:89] found id: ""
	I0725 18:51:39.780493   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.780503   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:39.780510   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:39.780581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:39.813672   60176 cri.go:89] found id: ""
	I0725 18:51:39.813694   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.813701   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:39.813709   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:39.813721   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:39.862459   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:39.862489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:39.875276   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:39.875304   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:39.941693   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:39.941715   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:39.941729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:40.017010   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:40.017055   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:42.559158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:42.571866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:42.571945   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:42.605268   60176 cri.go:89] found id: ""
	I0725 18:51:42.605317   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.605326   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:42.605332   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:42.605392   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:42.641719   60176 cri.go:89] found id: ""
	I0725 18:51:42.641753   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.641764   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:42.641774   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:42.641837   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:42.675667   60176 cri.go:89] found id: ""
	I0725 18:51:42.675695   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.675703   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:42.675711   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:42.675773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:42.709895   60176 cri.go:89] found id: ""
	I0725 18:51:42.709923   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.709933   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:42.709940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:42.710002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:42.742278   60176 cri.go:89] found id: ""
	I0725 18:51:42.742308   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.742318   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:42.742325   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:42.742395   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:42.773623   60176 cri.go:89] found id: ""
	I0725 18:51:42.773651   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.773661   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:42.773668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:42.773727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:42.810538   60176 cri.go:89] found id: ""
	I0725 18:51:42.810566   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.810576   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:42.810583   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:42.810657   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:42.850508   60176 cri.go:89] found id: ""
	I0725 18:51:42.850530   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.850537   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:42.850545   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:42.850556   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:42.901350   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:42.901389   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:42.914573   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:42.914600   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:42.978823   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:42.978852   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:42.978866   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:43.057323   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:43.057357   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:45.593677   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:45.607689   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:45.607801   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:45.640969   60176 cri.go:89] found id: ""
	I0725 18:51:45.640997   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.641007   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:45.641014   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:45.641075   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:45.672268   60176 cri.go:89] found id: ""
	I0725 18:51:45.672293   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.672300   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:45.672310   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:45.672396   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:45.705582   60176 cri.go:89] found id: ""
	I0725 18:51:45.705610   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.705618   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:45.705625   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:45.705686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:45.747705   60176 cri.go:89] found id: ""
	I0725 18:51:45.747737   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.747759   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:45.747766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:45.747815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:45.787258   60176 cri.go:89] found id: ""
	I0725 18:51:45.787284   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.787294   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:45.787302   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:45.787366   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:45.820971   60176 cri.go:89] found id: ""
	I0725 18:51:45.820992   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.821008   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:45.821019   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:45.821068   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:45.853828   60176 cri.go:89] found id: ""
	I0725 18:51:45.853858   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.853869   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:45.853876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:45.853935   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:45.886645   60176 cri.go:89] found id: ""
	I0725 18:51:45.886672   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.886682   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:45.886692   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:45.886708   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:45.953195   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:45.953223   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:45.953239   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:46.027894   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:46.027929   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:46.067935   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:46.067960   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:46.120467   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:46.120500   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:48.634095   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:48.647390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:48.647464   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:48.683149   60176 cri.go:89] found id: ""
	I0725 18:51:48.683171   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.683178   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:48.683203   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:48.683252   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:48.720502   60176 cri.go:89] found id: ""
	I0725 18:51:48.720529   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.720539   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:48.720546   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:48.720593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:48.752927   60176 cri.go:89] found id: ""
	I0725 18:51:48.752954   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.752962   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:48.752968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:48.753025   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:48.788434   60176 cri.go:89] found id: ""
	I0725 18:51:48.788460   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.788468   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:48.788474   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:48.788520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:48.825157   60176 cri.go:89] found id: ""
	I0725 18:51:48.825184   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.825194   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:48.825199   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:48.825248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:48.859948   60176 cri.go:89] found id: ""
	I0725 18:51:48.859973   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.859981   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:48.859986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:48.860046   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:48.894788   60176 cri.go:89] found id: ""
	I0725 18:51:48.894811   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.894819   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:48.894824   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:48.894878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:48.929619   60176 cri.go:89] found id: ""
	I0725 18:51:48.929645   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.929653   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:48.929662   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:48.929675   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:49.001842   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:49.001865   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:49.001888   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:49.086265   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:49.086299   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:49.127674   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:49.127704   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:49.181388   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:49.181424   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:51.695119   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:51.707568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:51.707630   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:51.742936   60176 cri.go:89] found id: ""
	I0725 18:51:51.742963   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.742973   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:51.742980   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:51.743033   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:51.776584   60176 cri.go:89] found id: ""
	I0725 18:51:51.776610   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.776618   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:51.776623   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:51.776691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:51.809763   60176 cri.go:89] found id: ""
	I0725 18:51:51.809787   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.809795   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:51.809800   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:51.809846   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:51.843330   60176 cri.go:89] found id: ""
	I0725 18:51:51.843359   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.843366   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:51.843372   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:51.843428   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:51.877636   60176 cri.go:89] found id: ""
	I0725 18:51:51.877670   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.877680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:51.877685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:51.877734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:51.911846   60176 cri.go:89] found id: ""
	I0725 18:51:51.911869   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.911876   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:51.911881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:51.911927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:51.945447   60176 cri.go:89] found id: ""
	I0725 18:51:51.945474   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.945482   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:51.945488   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:51.945539   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:51.976801   60176 cri.go:89] found id: ""
	I0725 18:51:51.976828   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.976838   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:51.976848   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:51.976863   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:51.989131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:51.989158   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:52.055834   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:52.055857   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:52.055871   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:52.132360   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:52.132399   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:52.170676   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:52.170706   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:54.724654   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:54.738852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:54.738910   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:54.772356   60176 cri.go:89] found id: ""
	I0725 18:51:54.772386   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.772396   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:54.772403   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:54.772463   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:54.805079   60176 cri.go:89] found id: ""
	I0725 18:51:54.805105   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.805115   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:54.805122   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:54.805179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:54.836276   60176 cri.go:89] found id: ""
	I0725 18:51:54.836303   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.836313   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:54.836329   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:54.836394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:54.869019   60176 cri.go:89] found id: ""
	I0725 18:51:54.869046   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.869053   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:54.869059   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:54.869108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:54.905448   60176 cri.go:89] found id: ""
	I0725 18:51:54.905475   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.905485   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:54.905492   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:54.905553   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:54.937364   60176 cri.go:89] found id: ""
	I0725 18:51:54.937387   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.937396   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:54.937401   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:54.937448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:54.969287   60176 cri.go:89] found id: ""
	I0725 18:51:54.969322   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.969333   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:54.969340   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:54.969405   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:55.002779   60176 cri.go:89] found id: ""
	I0725 18:51:55.002804   60176 logs.go:276] 0 containers: []
	W0725 18:51:55.002811   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:55.002819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:55.002830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:55.015588   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:55.015614   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:55.093349   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:55.093372   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:55.093388   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:55.174006   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:55.174046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:55.211316   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:55.211347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:57.762027   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:57.774121   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:57.774194   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:57.814748   60176 cri.go:89] found id: ""
	I0725 18:51:57.814779   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.814790   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:57.814798   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:57.814860   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:57.851037   60176 cri.go:89] found id: ""
	I0725 18:51:57.851063   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.851070   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:57.851075   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:57.851123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:57.882717   60176 cri.go:89] found id: ""
	I0725 18:51:57.882749   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.882760   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:57.882768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:57.882830   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:57.917019   60176 cri.go:89] found id: ""
	I0725 18:51:57.917049   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.917059   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:57.917066   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:57.917126   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:57.950853   60176 cri.go:89] found id: ""
	I0725 18:51:57.950882   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.950891   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:57.950896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:57.950962   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:57.991946   60176 cri.go:89] found id: ""
	I0725 18:51:57.991970   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.991980   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:57.991986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:57.992049   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:58.037572   60176 cri.go:89] found id: ""
	I0725 18:51:58.037602   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.037611   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:58.037617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:58.037679   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:58.073018   60176 cri.go:89] found id: ""
	I0725 18:51:58.073040   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.073048   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:58.073056   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:58.073068   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:58.144357   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:58.144382   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:58.144398   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:58.224162   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:58.224202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:58.260888   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:58.260914   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:58.313819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:58.313848   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:00.826939   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:00.838883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:00.838951   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:00.872544   60176 cri.go:89] found id: ""
	I0725 18:52:00.872573   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.872584   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:00.872600   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:00.872663   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:00.903504   60176 cri.go:89] found id: ""
	I0725 18:52:00.903526   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.903533   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:00.903539   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:00.903585   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:00.938057   60176 cri.go:89] found id: ""
	I0725 18:52:00.938085   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.938095   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:00.938103   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:00.938168   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:00.970586   60176 cri.go:89] found id: ""
	I0725 18:52:00.970616   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.970625   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:00.970631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:00.970699   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:01.004158   60176 cri.go:89] found id: ""
	I0725 18:52:01.004192   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.004201   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:01.004205   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:01.004265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:01.036833   60176 cri.go:89] found id: ""
	I0725 18:52:01.036862   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.036871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:01.036876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:01.036927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:01.072207   60176 cri.go:89] found id: ""
	I0725 18:52:01.072236   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.072247   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:01.072253   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:01.072309   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:01.110805   60176 cri.go:89] found id: ""
	I0725 18:52:01.110859   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.110871   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:01.110882   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:01.110898   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:01.150422   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:01.150448   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:01.198988   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:01.199026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:01.212826   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:01.212860   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:01.282008   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:01.282034   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:01.282054   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:03.865014   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:03.877335   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:03.877419   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:03.913376   60176 cri.go:89] found id: ""
	I0725 18:52:03.913406   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.913413   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:03.913420   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:03.913469   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:03.948997   60176 cri.go:89] found id: ""
	I0725 18:52:03.949022   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.949029   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:03.949034   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:03.949082   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:03.985320   60176 cri.go:89] found id: ""
	I0725 18:52:03.985353   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.985361   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:03.985367   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:03.985423   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:04.019626   60176 cri.go:89] found id: ""
	I0725 18:52:04.019648   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.019656   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:04.019662   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:04.019716   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:04.050947   60176 cri.go:89] found id: ""
	I0725 18:52:04.050978   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.050989   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:04.050997   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:04.051066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:04.083581   60176 cri.go:89] found id: ""
	I0725 18:52:04.083613   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.083625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:04.083633   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:04.083712   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:04.117537   60176 cri.go:89] found id: ""
	I0725 18:52:04.117574   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.117585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:04.117592   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:04.117652   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:04.151531   60176 cri.go:89] found id: ""
	I0725 18:52:04.151556   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.151563   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:04.151575   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:04.151593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:04.201037   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:04.201067   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:04.214848   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:04.214879   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:04.281309   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:04.281338   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:04.281353   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:04.360880   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:04.360913   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:06.899950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:06.912053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:06.912124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:06.945726   60176 cri.go:89] found id: ""
	I0725 18:52:06.945752   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.945761   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:06.945766   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:06.945824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:06.979170   60176 cri.go:89] found id: ""
	I0725 18:52:06.979200   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.979210   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:06.979217   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:06.979279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:07.009633   60176 cri.go:89] found id: ""
	I0725 18:52:07.009661   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.009670   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:07.009675   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:07.009735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:07.042022   60176 cri.go:89] found id: ""
	I0725 18:52:07.042045   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.042054   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:07.042061   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:07.042121   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:07.074755   60176 cri.go:89] found id: ""
	I0725 18:52:07.074779   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.074787   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:07.074792   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:07.074853   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:07.109421   60176 cri.go:89] found id: ""
	I0725 18:52:07.109447   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.109457   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:07.109464   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:07.109522   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:07.144848   60176 cri.go:89] found id: ""
	I0725 18:52:07.144879   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.144889   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:07.144897   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:07.144956   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:07.182129   60176 cri.go:89] found id: ""
	I0725 18:52:07.182157   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.182169   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:07.182178   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:07.182192   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:07.235471   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:07.235509   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:07.251999   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:07.252025   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:07.334671   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:07.334691   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:07.334703   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:07.415819   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:07.415853   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.953603   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:09.966281   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:09.966362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:09.998237   60176 cri.go:89] found id: ""
	I0725 18:52:09.998259   60176 logs.go:276] 0 containers: []
	W0725 18:52:09.998267   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:09.998272   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:09.998332   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:10.030191   60176 cri.go:89] found id: ""
	I0725 18:52:10.030213   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.030220   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:10.030228   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:10.030273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:10.062117   60176 cri.go:89] found id: ""
	I0725 18:52:10.062144   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.062154   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:10.062159   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:10.062208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:10.093801   60176 cri.go:89] found id: ""
	I0725 18:52:10.093831   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.093841   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:10.093848   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:10.093911   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:10.125705   60176 cri.go:89] found id: ""
	I0725 18:52:10.125731   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.125741   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:10.125748   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:10.125814   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:10.158731   60176 cri.go:89] found id: ""
	I0725 18:52:10.158753   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.158761   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:10.158766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:10.158810   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:10.190408   60176 cri.go:89] found id: ""
	I0725 18:52:10.190435   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.190443   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:10.190449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:10.190503   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:10.221937   60176 cri.go:89] found id: ""
	I0725 18:52:10.221967   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.221977   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:10.221992   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:10.222007   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:10.270299   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:10.270332   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:10.283787   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:10.283823   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:10.358121   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:10.358146   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:10.358163   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:10.437607   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:10.437643   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:12.978064   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:12.995812   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:12.995868   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:13.041196   60176 cri.go:89] found id: ""
	I0725 18:52:13.041222   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.041231   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:13.041239   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:13.041290   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:13.074981   60176 cri.go:89] found id: ""
	I0725 18:52:13.075005   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.075013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:13.075018   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:13.075078   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:13.108689   60176 cri.go:89] found id: ""
	I0725 18:52:13.108714   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.108725   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:13.108732   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:13.108788   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:13.144876   60176 cri.go:89] found id: ""
	I0725 18:52:13.144903   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.144913   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:13.144920   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:13.145008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:13.177912   60176 cri.go:89] found id: ""
	I0725 18:52:13.177936   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.177943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:13.177949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:13.178004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:13.208752   60176 cri.go:89] found id: ""
	I0725 18:52:13.208783   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.208794   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:13.208802   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:13.208861   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:13.240146   60176 cri.go:89] found id: ""
	I0725 18:52:13.240181   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.240191   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:13.240197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:13.240265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:13.276749   60176 cri.go:89] found id: ""
	I0725 18:52:13.276775   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.276783   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:13.276793   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:13.276808   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:13.342307   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:13.342341   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:13.342358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:13.426659   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:13.426691   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:13.462986   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:13.463014   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:13.513921   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:13.513956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.028587   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:16.041712   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:16.041771   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:16.074562   60176 cri.go:89] found id: ""
	I0725 18:52:16.074593   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.074603   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:16.074611   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:16.074668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:16.110581   60176 cri.go:89] found id: ""
	I0725 18:52:16.110610   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.110620   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:16.110627   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:16.110686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:16.145233   60176 cri.go:89] found id: ""
	I0725 18:52:16.145256   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.145266   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:16.145274   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:16.145333   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:16.180032   60176 cri.go:89] found id: ""
	I0725 18:52:16.180059   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.180070   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:16.180084   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:16.180147   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:16.211984   60176 cri.go:89] found id: ""
	I0725 18:52:16.212013   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.212021   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:16.212028   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:16.212086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:16.243930   60176 cri.go:89] found id: ""
	I0725 18:52:16.243958   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.243965   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:16.243970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:16.244018   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:16.276858   60176 cri.go:89] found id: ""
	I0725 18:52:16.276886   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.276895   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:16.276903   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:16.276964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:16.309039   60176 cri.go:89] found id: ""
	I0725 18:52:16.309068   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.309079   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:16.309089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:16.309103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:16.358664   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:16.358699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.371701   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:16.371733   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:16.440851   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:16.440877   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:16.440892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:16.515546   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:16.515581   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.053916   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:19.067831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:19.067899   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:19.100740   60176 cri.go:89] found id: ""
	I0725 18:52:19.100765   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.100776   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:19.100783   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:19.100844   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:19.137247   60176 cri.go:89] found id: ""
	I0725 18:52:19.137272   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.137279   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:19.137284   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:19.137348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:19.181550   60176 cri.go:89] found id: ""
	I0725 18:52:19.181582   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.181594   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:19.181601   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:19.181666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:19.215392   60176 cri.go:89] found id: ""
	I0725 18:52:19.215418   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.215427   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:19.215433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:19.215495   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:19.247896   60176 cri.go:89] found id: ""
	I0725 18:52:19.247923   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.247933   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:19.247940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:19.248001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:19.285250   60176 cri.go:89] found id: ""
	I0725 18:52:19.285276   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.285286   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:19.285293   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:19.285362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:19.323470   60176 cri.go:89] found id: ""
	I0725 18:52:19.323500   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.323510   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:19.323518   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:19.323583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:19.358435   60176 cri.go:89] found id: ""
	I0725 18:52:19.358458   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.358466   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:19.358475   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:19.358491   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:19.422806   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:19.422825   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:19.422837   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:19.504316   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:19.504370   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.543929   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:19.543956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:19.596268   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:19.596300   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:22.110193   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:22.123411   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:22.123472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:22.158539   60176 cri.go:89] found id: ""
	I0725 18:52:22.158577   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.158588   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:22.158595   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:22.158654   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:22.196231   60176 cri.go:89] found id: ""
	I0725 18:52:22.196260   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.196270   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:22.196277   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:22.196354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:22.233119   60176 cri.go:89] found id: ""
	I0725 18:52:22.233150   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.233160   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:22.233167   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:22.233231   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:22.265273   60176 cri.go:89] found id: ""
	I0725 18:52:22.265302   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.265312   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:22.265322   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:22.265384   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:22.298933   60176 cri.go:89] found id: ""
	I0725 18:52:22.298959   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.298968   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:22.298982   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:22.299055   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:22.330841   60176 cri.go:89] found id: ""
	I0725 18:52:22.330877   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.330888   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:22.330896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:22.330965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:22.363717   60176 cri.go:89] found id: ""
	I0725 18:52:22.363743   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.363753   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:22.363760   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:22.363818   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:22.398672   60176 cri.go:89] found id: ""
	I0725 18:52:22.398701   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.398711   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:22.398722   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:22.398739   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:22.452774   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:22.452807   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:22.465478   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:22.465507   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:22.538473   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:22.538492   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:22.538504   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:22.622982   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:22.623029   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:25.163174   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:25.176183   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:25.176242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:25.212455   60176 cri.go:89] found id: ""
	I0725 18:52:25.212488   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.212497   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:25.212504   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:25.212558   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:25.249901   60176 cri.go:89] found id: ""
	I0725 18:52:25.249930   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.249938   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:25.249943   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:25.250002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:25.284400   60176 cri.go:89] found id: ""
	I0725 18:52:25.284425   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.284435   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:25.284443   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:25.284510   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:25.322175   60176 cri.go:89] found id: ""
	I0725 18:52:25.322199   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.322208   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:25.322214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:25.322274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:25.358579   60176 cri.go:89] found id: ""
	I0725 18:52:25.358606   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.358613   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:25.358618   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:25.358668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:25.393516   60176 cri.go:89] found id: ""
	I0725 18:52:25.393541   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.393552   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:25.393559   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:25.393619   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:25.426256   60176 cri.go:89] found id: ""
	I0725 18:52:25.426284   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.426293   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:25.426300   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:25.426386   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:25.460227   60176 cri.go:89] found id: ""
	I0725 18:52:25.460249   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.460257   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:25.460265   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:25.460276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:25.512461   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:25.512494   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:25.526304   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:25.526347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:25.597593   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:25.597618   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:25.597634   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:25.674233   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:25.674269   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:28.209473   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:28.223161   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:28.223226   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:28.260471   60176 cri.go:89] found id: ""
	I0725 18:52:28.260500   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.260510   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:28.260517   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:28.260578   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:28.296055   60176 cri.go:89] found id: ""
	I0725 18:52:28.296093   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.296109   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:28.296117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:28.296179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:28.327790   60176 cri.go:89] found id: ""
	I0725 18:52:28.327819   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.327830   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:28.327836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:28.327896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:28.359967   60176 cri.go:89] found id: ""
	I0725 18:52:28.359994   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.360005   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:28.360012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:28.360076   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:28.394025   60176 cri.go:89] found id: ""
	I0725 18:52:28.394057   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.394065   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:28.394070   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:28.394119   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:28.425844   60176 cri.go:89] found id: ""
	I0725 18:52:28.425866   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.425874   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:28.425881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:28.425952   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:28.459239   60176 cri.go:89] found id: ""
	I0725 18:52:28.459266   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.459276   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:28.459283   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:28.459355   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:28.493964   60176 cri.go:89] found id: ""
	I0725 18:52:28.493992   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.494004   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:28.494015   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:28.494030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:28.543108   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:28.543138   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:28.556408   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:28.556440   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:28.622780   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:28.622802   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:28.622815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:28.705901   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:28.705935   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.247642   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:31.260467   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:31.260536   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:31.293280   60176 cri.go:89] found id: ""
	I0725 18:52:31.293303   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.293311   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:31.293316   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:31.293372   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:31.325186   60176 cri.go:89] found id: ""
	I0725 18:52:31.325220   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.325232   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:31.325238   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:31.325295   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:31.359715   60176 cri.go:89] found id: ""
	I0725 18:52:31.359744   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.359756   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:31.359763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:31.359821   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:31.396998   60176 cri.go:89] found id: ""
	I0725 18:52:31.397031   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.397043   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:31.397051   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:31.397107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:31.430896   60176 cri.go:89] found id: ""
	I0725 18:52:31.430921   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.430934   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:31.430941   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:31.430993   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:31.464746   60176 cri.go:89] found id: ""
	I0725 18:52:31.464775   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.464785   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:31.464791   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:31.464856   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:31.500645   60176 cri.go:89] found id: ""
	I0725 18:52:31.500668   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.500677   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:31.500682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:31.500730   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:31.534394   60176 cri.go:89] found id: ""
	I0725 18:52:31.534418   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.534427   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:31.534434   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:31.534446   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:31.615633   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:31.615667   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.657138   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:31.657164   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:31.707872   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:31.707907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:31.721076   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:31.721100   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:31.787451   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:34.288248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:34.301172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:34.301230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:34.333115   60176 cri.go:89] found id: ""
	I0725 18:52:34.333143   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.333153   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:34.333159   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:34.333206   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:34.368762   60176 cri.go:89] found id: ""
	I0725 18:52:34.368794   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.368805   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:34.368812   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:34.368875   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:34.404655   60176 cri.go:89] found id: ""
	I0725 18:52:34.404681   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.404691   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:34.404699   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:34.404759   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:34.438034   60176 cri.go:89] found id: ""
	I0725 18:52:34.438058   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.438068   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:34.438075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:34.438134   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:34.472642   60176 cri.go:89] found id: ""
	I0725 18:52:34.472667   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.472678   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:34.472684   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:34.472744   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:34.511813   60176 cri.go:89] found id: ""
	I0725 18:52:34.511846   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.511858   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:34.511876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:34.511947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:34.544142   60176 cri.go:89] found id: ""
	I0725 18:52:34.544172   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.544183   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:34.544190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:34.544253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:34.580404   60176 cri.go:89] found id: ""
	I0725 18:52:34.580428   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.580439   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:34.580451   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:34.580468   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:34.620866   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:34.620892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:34.675204   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:34.675237   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:34.688592   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:34.688616   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:34.760208   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
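	(The "connection to the server localhost:8443 was refused" errors come from kubectl reading /var/lib/minikube/kubeconfig, which points at the apiserver's secure port on the node; since no kube-apiserver container exists, nothing is listening there. A minimal sketch of verifying that directly, assuming shell access on the node; port 8443 is taken from the error text above:)

		# confirm nothing is bound to the apiserver port
		sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
		# a direct probe fails the same way kubectl does
		curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"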
	I0725 18:52:34.760234   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:34.760251   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:37.337593   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:37.353055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:37.353125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:37.386957   60176 cri.go:89] found id: ""
	I0725 18:52:37.386985   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.386996   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:37.387003   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:37.387062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:37.419464   60176 cri.go:89] found id: ""
	I0725 18:52:37.419489   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.419496   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:37.419501   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:37.419557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:37.452553   60176 cri.go:89] found id: ""
	I0725 18:52:37.452582   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.452592   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:37.452598   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:37.452660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:37.484946   60176 cri.go:89] found id: ""
	I0725 18:52:37.484971   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.484978   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:37.484983   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:37.485029   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:37.517509   60176 cri.go:89] found id: ""
	I0725 18:52:37.517535   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.517546   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:37.517554   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:37.517604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:37.549971   60176 cri.go:89] found id: ""
	I0725 18:52:37.549995   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.550003   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:37.550010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:37.550067   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:37.581630   60176 cri.go:89] found id: ""
	I0725 18:52:37.581661   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.581670   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:37.581676   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:37.581736   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:37.616677   60176 cri.go:89] found id: ""
	I0725 18:52:37.616705   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.616714   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:37.616727   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:37.616741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:37.630482   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:37.630517   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:37.699856   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:37.699883   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:37.699912   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:37.781132   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:37.781162   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:37.819877   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:37.819904   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.372910   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:40.385605   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:40.385672   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:40.420547   60176 cri.go:89] found id: ""
	I0725 18:52:40.420575   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.420586   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:40.420593   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:40.420642   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:40.455644   60176 cri.go:89] found id: ""
	I0725 18:52:40.455666   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.455674   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:40.455679   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:40.455735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:40.486576   60176 cri.go:89] found id: ""
	I0725 18:52:40.486599   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.486607   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:40.486613   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:40.486661   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:40.520015   60176 cri.go:89] found id: ""
	I0725 18:52:40.520038   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.520046   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:40.520053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:40.520115   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:40.550645   60176 cri.go:89] found id: ""
	I0725 18:52:40.550672   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.550680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:40.550685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:40.550739   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:40.584736   60176 cri.go:89] found id: ""
	I0725 18:52:40.584759   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.584766   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:40.584771   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:40.584827   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:40.620112   60176 cri.go:89] found id: ""
	I0725 18:52:40.620140   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.620151   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:40.620158   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:40.620221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:40.660888   60176 cri.go:89] found id: ""
	I0725 18:52:40.660910   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.660917   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:40.660926   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:40.660937   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.713935   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:40.713967   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:40.727194   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:40.727218   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:40.797362   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:40.797387   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:40.797408   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:40.878723   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:40.878756   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:43.421579   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:43.434054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:43.434113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:43.468844   60176 cri.go:89] found id: ""
	I0725 18:52:43.468870   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.468880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:43.468887   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:43.468948   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:43.501075   60176 cri.go:89] found id: ""
	I0725 18:52:43.501102   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.501113   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:43.501120   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:43.501175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:43.533347   60176 cri.go:89] found id: ""
	I0725 18:52:43.533372   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.533381   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:43.533387   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:43.533439   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:43.569764   60176 cri.go:89] found id: ""
	I0725 18:52:43.569787   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.569795   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:43.569801   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:43.569851   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:43.604897   60176 cri.go:89] found id: ""
	I0725 18:52:43.604924   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.604935   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:43.604942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:43.604999   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:43.638584   60176 cri.go:89] found id: ""
	I0725 18:52:43.638621   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.638633   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:43.638640   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:43.638691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:43.672302   60176 cri.go:89] found id: ""
	I0725 18:52:43.672348   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.672359   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:43.672366   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:43.672425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:43.708589   60176 cri.go:89] found id: ""
	I0725 18:52:43.708620   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.708630   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:43.708641   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:43.708660   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:43.761733   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:43.761766   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:43.775233   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:43.775258   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:43.840767   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:43.840788   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:43.840803   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:43.914698   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:43.914730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
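	(The "container status" step uses a shell fallback: it resolves crictl with `which`, lists all containers with it, and only if that whole command fails does it fall back to `docker ps -a`. A roughly equivalent, more explicit sketch, assuming the same tools are on the node:)

		# prefer crictl when available, otherwise fall back to docker
		if command -v crictl >/dev/null 2>&1; then
		    sudo crictl ps -a
		else
		    sudo docker ps -a
		fi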
	I0725 18:52:46.451913   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:46.465836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:46.465896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:46.499330   60176 cri.go:89] found id: ""
	I0725 18:52:46.499359   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.499369   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:46.499381   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:46.499446   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:46.537724   60176 cri.go:89] found id: ""
	I0725 18:52:46.537748   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.537758   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:46.537764   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:46.537825   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:46.568410   60176 cri.go:89] found id: ""
	I0725 18:52:46.568437   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.568446   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:46.568453   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:46.568519   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:46.599497   60176 cri.go:89] found id: ""
	I0725 18:52:46.599525   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.599535   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:46.599542   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:46.599607   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:46.631388   60176 cri.go:89] found id: ""
	I0725 18:52:46.631418   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.631427   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:46.631433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:46.631489   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:46.670666   60176 cri.go:89] found id: ""
	I0725 18:52:46.670688   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.670695   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:46.670701   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:46.670756   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:46.702825   60176 cri.go:89] found id: ""
	I0725 18:52:46.702862   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.702874   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:46.702883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:46.702947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:46.738431   60176 cri.go:89] found id: ""
	I0725 18:52:46.738459   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.738469   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:46.738479   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:46.738493   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:46.796704   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:46.796748   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:46.812042   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:46.812072   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:46.884905   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:46.884927   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:46.884942   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:46.965733   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:46.965773   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.505190   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:49.519648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:49.519733   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:49.559027   60176 cri.go:89] found id: ""
	I0725 18:52:49.559057   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.559064   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:49.559072   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:49.559124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:49.591468   60176 cri.go:89] found id: ""
	I0725 18:52:49.591489   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.591497   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:49.591503   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:49.591557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:49.629091   60176 cri.go:89] found id: ""
	I0725 18:52:49.629120   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.629129   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:49.629135   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:49.629199   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:49.664584   60176 cri.go:89] found id: ""
	I0725 18:52:49.664621   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.664633   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:49.664641   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:49.664693   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:49.695208   60176 cri.go:89] found id: ""
	I0725 18:52:49.695237   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.695247   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:49.695258   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:49.695323   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:49.726260   60176 cri.go:89] found id: ""
	I0725 18:52:49.726288   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.726299   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:49.726306   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:49.726468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:49.759938   60176 cri.go:89] found id: ""
	I0725 18:52:49.759969   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.759981   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:49.759990   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:49.760043   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:49.794113   60176 cri.go:89] found id: ""
	I0725 18:52:49.794142   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.794153   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:49.794164   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:49.794178   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.834409   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:49.834443   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:49.890684   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:49.890730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:49.904504   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:49.904534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:49.971482   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:49.971508   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:49.971523   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
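	(When no apiserver container ever appears, the CRI-O journal gathered above is usually the most informative of these logs, since it records failed pod-sandbox creation and image pulls. A minimal sketch for narrowing the same 400-line window down to likely causes, assuming shell access on the node:)

		# surface recent CRI-O errors from the window gathered above
		sudo journalctl -u crio -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40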
	I0725 18:52:52.552586   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:52.564658   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:52.564732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:52.604434   60176 cri.go:89] found id: ""
	I0725 18:52:52.604460   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.604470   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:52.604478   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:52.604532   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:52.638870   60176 cri.go:89] found id: ""
	I0725 18:52:52.638893   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.638907   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:52.638914   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:52.638973   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:52.670494   60176 cri.go:89] found id: ""
	I0725 18:52:52.670521   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.670531   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:52.670538   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:52.670604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:52.702250   60176 cri.go:89] found id: ""
	I0725 18:52:52.702282   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.702291   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:52.702298   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:52.702360   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:52.734144   60176 cri.go:89] found id: ""
	I0725 18:52:52.734172   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.734181   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:52.734187   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:52.734241   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:52.767581   60176 cri.go:89] found id: ""
	I0725 18:52:52.767606   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.767617   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:52.767624   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:52.767687   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:52.798874   60176 cri.go:89] found id: ""
	I0725 18:52:52.798895   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.798903   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:52.798908   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:52.798965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:52.829237   60176 cri.go:89] found id: ""
	I0725 18:52:52.829266   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.829276   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:52.829287   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:52.829309   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:52.879820   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:52.879856   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:52.893453   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:52.893477   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:52.962899   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:52.962925   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:52.962944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:53.042202   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:53.042234   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.581146   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:55.594458   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:55.594529   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:55.628122   60176 cri.go:89] found id: ""
	I0725 18:52:55.628152   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.628163   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:55.628170   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:55.628240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:55.661098   60176 cri.go:89] found id: ""
	I0725 18:52:55.661126   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.661137   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:55.661143   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:55.661195   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:55.694635   60176 cri.go:89] found id: ""
	I0725 18:52:55.694664   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.694675   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:55.694682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:55.694746   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:55.728875   60176 cri.go:89] found id: ""
	I0725 18:52:55.728902   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.728912   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:55.728924   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:55.728986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:55.764386   60176 cri.go:89] found id: ""
	I0725 18:52:55.764414   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.764423   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:55.764430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:55.764487   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:55.798285   60176 cri.go:89] found id: ""
	I0725 18:52:55.798335   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.798348   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:55.798355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:55.798407   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:55.833049   60176 cri.go:89] found id: ""
	I0725 18:52:55.833076   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.833083   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:55.833088   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:55.833144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:55.872278   60176 cri.go:89] found id: ""
	I0725 18:52:55.872310   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.872335   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:55.872347   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:55.872362   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.908301   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:55.908344   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:55.960197   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:55.960230   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:55.973912   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:55.973941   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:56.042103   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:56.042128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:56.042141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:58.618832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:58.631315   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:58.631374   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:58.666492   60176 cri.go:89] found id: ""
	I0725 18:52:58.666521   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.666532   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:58.666540   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:58.666608   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:58.700391   60176 cri.go:89] found id: ""
	I0725 18:52:58.700421   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.700431   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:58.700450   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:58.700518   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:58.734582   60176 cri.go:89] found id: ""
	I0725 18:52:58.734608   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.734617   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:58.734621   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:58.734692   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:58.767777   60176 cri.go:89] found id: ""
	I0725 18:52:58.767806   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.767817   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:58.767823   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:58.767886   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:58.801021   60176 cri.go:89] found id: ""
	I0725 18:52:58.801046   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.801053   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:58.801058   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:58.801102   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:58.833191   60176 cri.go:89] found id: ""
	I0725 18:52:58.833223   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.833231   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:58.833236   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:58.833284   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:58.864805   60176 cri.go:89] found id: ""
	I0725 18:52:58.864839   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.864849   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:58.864854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:58.864916   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:58.896342   60176 cri.go:89] found id: ""
	I0725 18:52:58.896373   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.896384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:58.896396   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:58.896415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:58.950614   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:58.950652   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:58.974026   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:58.974063   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:59.056282   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:59.056305   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:59.056349   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:59.138254   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:59.138292   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:01.680405   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:01.693093   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:01.693161   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:01.725456   60176 cri.go:89] found id: ""
	I0725 18:53:01.725483   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.725494   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:01.725501   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:01.725562   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:01.757644   60176 cri.go:89] found id: ""
	I0725 18:53:01.757677   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.757688   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:01.757694   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:01.757765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:01.793640   60176 cri.go:89] found id: ""
	I0725 18:53:01.793660   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.793667   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:01.793672   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:01.793718   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:01.829336   60176 cri.go:89] found id: ""
	I0725 18:53:01.829368   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.829379   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:01.829386   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:01.829442   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:01.864597   60176 cri.go:89] found id: ""
	I0725 18:53:01.864625   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.864636   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:01.864643   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:01.864704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:01.895962   60176 cri.go:89] found id: ""
	I0725 18:53:01.895990   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.896001   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:01.896012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:01.896070   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:01.926426   60176 cri.go:89] found id: ""
	I0725 18:53:01.926451   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.926459   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:01.926463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:01.926517   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:01.957722   60176 cri.go:89] found id: ""
	I0725 18:53:01.957746   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.957755   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:01.957764   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:01.957779   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:02.012061   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:02.012096   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:02.025396   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:02.025423   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:02.088683   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:02.088706   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:02.088718   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:02.170941   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:02.170974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.713619   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:04.734911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:04.734970   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:04.793399   60176 cri.go:89] found id: ""
	I0725 18:53:04.793427   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.793438   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:04.793445   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:04.793493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:04.823679   60176 cri.go:89] found id: ""
	I0725 18:53:04.823711   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.823723   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:04.823729   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:04.823793   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:04.854922   60176 cri.go:89] found id: ""
	I0725 18:53:04.854957   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.854964   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:04.854970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:04.855023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:04.886913   60176 cri.go:89] found id: ""
	I0725 18:53:04.886937   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.886945   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:04.886953   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:04.887008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:04.919868   60176 cri.go:89] found id: ""
	I0725 18:53:04.919896   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.919907   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:04.919914   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:04.919979   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:04.953542   60176 cri.go:89] found id: ""
	I0725 18:53:04.953571   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.953581   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:04.953588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:04.953649   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:04.986901   60176 cri.go:89] found id: ""
	I0725 18:53:04.986925   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.986932   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:04.986937   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:04.986986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:05.020084   60176 cri.go:89] found id: ""
	I0725 18:53:05.020124   60176 logs.go:276] 0 containers: []
	W0725 18:53:05.020133   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:05.020141   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:05.020153   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:05.075512   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:05.075544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:05.089227   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:05.089256   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:05.155689   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:05.155707   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:05.155719   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:05.230252   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:05.230286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:07.770919   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:07.784196   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:07.784354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:07.817549   60176 cri.go:89] found id: ""
	I0725 18:53:07.817581   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.817593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:07.817601   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:07.817674   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:07.852853   60176 cri.go:89] found id: ""
	I0725 18:53:07.852876   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.852883   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:07.852889   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:07.852941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:07.890344   60176 cri.go:89] found id: ""
	I0725 18:53:07.890370   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.890377   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:07.890383   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:07.890443   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:07.921718   60176 cri.go:89] found id: ""
	I0725 18:53:07.921749   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.921760   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:07.921768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:07.921824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:07.955721   60176 cri.go:89] found id: ""
	I0725 18:53:07.955753   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.955763   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:07.955769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:07.955820   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:07.987760   60176 cri.go:89] found id: ""
	I0725 18:53:07.987789   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.987799   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:07.987806   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:07.987878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:08.020881   60176 cri.go:89] found id: ""
	I0725 18:53:08.020912   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.020922   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:08.020929   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:08.020994   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:08.053983   60176 cri.go:89] found id: ""
	I0725 18:53:08.054013   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.054025   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:08.054037   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:08.054053   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:08.134954   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:08.134996   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:08.177056   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:08.177085   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:08.229080   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:08.229121   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:08.242211   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:08.242242   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:08.305979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:10.806662   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:10.819111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:10.819172   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:10.854609   60176 cri.go:89] found id: ""
	I0725 18:53:10.854639   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.854652   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:10.854660   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:10.854743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:10.893436   60176 cri.go:89] found id: ""
	I0725 18:53:10.893466   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.893478   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:10.893486   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:10.893555   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:10.927410   60176 cri.go:89] found id: ""
	I0725 18:53:10.927435   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.927444   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:10.927449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:10.927520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:10.958061   60176 cri.go:89] found id: ""
	I0725 18:53:10.958082   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.958090   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:10.958095   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:10.958149   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:10.988781   60176 cri.go:89] found id: ""
	I0725 18:53:10.988812   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.988824   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:10.988831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:10.988892   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:11.021096   60176 cri.go:89] found id: ""
	I0725 18:53:11.021126   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.021137   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:11.021145   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:11.021204   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:11.053320   60176 cri.go:89] found id: ""
	I0725 18:53:11.053355   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.053368   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:11.053377   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:11.053445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:11.085018   60176 cri.go:89] found id: ""
	I0725 18:53:11.085046   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.085055   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:11.085063   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:11.085074   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:11.136102   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:11.136139   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:11.150126   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:11.150154   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:11.219206   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:11.219226   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:11.219243   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:11.301501   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:11.301534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:13.840771   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:13.853763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:13.853848   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:13.889060   60176 cri.go:89] found id: ""
	I0725 18:53:13.889089   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.889098   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:13.889105   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:13.889163   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:13.920861   60176 cri.go:89] found id: ""
	I0725 18:53:13.920889   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.920900   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:13.920910   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:13.920974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:13.952009   60176 cri.go:89] found id: ""
	I0725 18:53:13.952037   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.952048   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:13.952054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:13.952117   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:13.985991   60176 cri.go:89] found id: ""
	I0725 18:53:13.986020   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.986030   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:13.986036   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:13.986098   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:14.024968   60176 cri.go:89] found id: ""
	I0725 18:53:14.024995   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.025003   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:14.025008   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:14.025079   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:14.058861   60176 cri.go:89] found id: ""
	I0725 18:53:14.058886   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.058897   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:14.058912   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:14.058976   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:14.092587   60176 cri.go:89] found id: ""
	I0725 18:53:14.092613   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.092628   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:14.092634   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:14.092697   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:14.127085   60176 cri.go:89] found id: ""
	I0725 18:53:14.127115   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.127124   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:14.127134   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:14.127148   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:14.179505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:14.179537   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:14.192813   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:14.192840   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:14.256564   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:14.256590   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:14.256604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:14.338570   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:14.338604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:16.877636   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:16.891131   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:16.891208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:16.924210   60176 cri.go:89] found id: ""
	I0725 18:53:16.924243   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.924253   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:16.924261   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:16.924343   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:16.957212   60176 cri.go:89] found id: ""
	I0725 18:53:16.957240   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.957247   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:16.957254   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:16.957341   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:16.989205   60176 cri.go:89] found id: ""
	I0725 18:53:16.989236   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.989244   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:16.989249   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:16.989298   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:17.025775   60176 cri.go:89] found id: ""
	I0725 18:53:17.025801   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.025812   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:17.025819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:17.025887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:17.059185   60176 cri.go:89] found id: ""
	I0725 18:53:17.059213   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.059223   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:17.059229   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:17.059275   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:17.090838   60176 cri.go:89] found id: ""
	I0725 18:53:17.090863   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.090871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:17.090876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:17.090932   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:17.126012   60176 cri.go:89] found id: ""
	I0725 18:53:17.126036   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.126044   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:17.126048   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:17.126106   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:17.165369   60176 cri.go:89] found id: ""
	I0725 18:53:17.165394   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.165405   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:17.165415   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:17.165436   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:17.178730   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:17.178771   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:17.251639   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:17.251666   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:17.251681   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:17.334840   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:17.334887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:17.380868   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:17.380895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.931610   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:19.943864   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:19.943964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:19.975865   60176 cri.go:89] found id: ""
	I0725 18:53:19.975893   60176 logs.go:276] 0 containers: []
	W0725 18:53:19.975904   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:19.975910   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:19.975975   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:20.010230   60176 cri.go:89] found id: ""
	I0725 18:53:20.010258   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.010268   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:20.010274   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:20.010321   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:20.042591   60176 cri.go:89] found id: ""
	I0725 18:53:20.042618   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.042626   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:20.042632   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:20.042680   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:20.073184   60176 cri.go:89] found id: ""
	I0725 18:53:20.073212   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.073224   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:20.073231   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:20.073286   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:20.106770   60176 cri.go:89] found id: ""
	I0725 18:53:20.106798   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.106810   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:20.106818   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:20.106888   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:20.141368   60176 cri.go:89] found id: ""
	I0725 18:53:20.141419   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.141429   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:20.141437   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:20.141496   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:20.174814   60176 cri.go:89] found id: ""
	I0725 18:53:20.174841   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.174852   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:20.174859   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:20.174918   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:20.208463   60176 cri.go:89] found id: ""
	I0725 18:53:20.208489   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.208497   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:20.208505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:20.208519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:20.220843   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:20.220867   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:20.287846   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:20.287871   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:20.287887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:20.362354   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:20.362391   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:20.399616   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:20.399650   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:22.950804   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:22.963553   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:22.963625   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:22.996193   60176 cri.go:89] found id: ""
	I0725 18:53:22.996215   60176 logs.go:276] 0 containers: []
	W0725 18:53:22.996222   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:22.996228   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:22.996273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:23.029417   60176 cri.go:89] found id: ""
	I0725 18:53:23.029446   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.029455   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:23.029460   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:23.029508   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:23.062381   60176 cri.go:89] found id: ""
	I0725 18:53:23.062406   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.062414   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:23.062419   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:23.062471   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:23.093948   60176 cri.go:89] found id: ""
	I0725 18:53:23.093975   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.093987   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:23.093995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:23.094066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:23.128049   60176 cri.go:89] found id: ""
	I0725 18:53:23.128076   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.128085   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:23.128091   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:23.128139   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:23.164593   60176 cri.go:89] found id: ""
	I0725 18:53:23.164617   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.164625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:23.164631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:23.164683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:23.197996   60176 cri.go:89] found id: ""
	I0725 18:53:23.198024   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.198032   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:23.198037   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:23.198087   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:23.233498   60176 cri.go:89] found id: ""
	I0725 18:53:23.233533   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.233545   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:23.233565   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:23.233580   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:23.287473   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:23.287506   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:23.300308   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:23.300358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:23.368879   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:23.368906   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:23.368919   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:23.445420   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:23.445453   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:25.985626   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:25.997898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:25.997971   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:26.030558   60176 cri.go:89] found id: ""
	I0725 18:53:26.030584   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.030593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:26.030599   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:26.030660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:26.067209   60176 cri.go:89] found id: ""
	I0725 18:53:26.067245   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.067256   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:26.067263   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:26.067348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:26.100872   60176 cri.go:89] found id: ""
	I0725 18:53:26.100891   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.100897   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:26.100902   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:26.100949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:26.135077   60176 cri.go:89] found id: ""
	I0725 18:53:26.135102   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.135110   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:26.135115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:26.135175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:26.171332   60176 cri.go:89] found id: ""
	I0725 18:53:26.171431   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.171445   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:26.171452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:26.171507   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:26.205883   60176 cri.go:89] found id: ""
	I0725 18:53:26.205912   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.205923   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:26.205930   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:26.205989   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:26.240407   60176 cri.go:89] found id: ""
	I0725 18:53:26.240436   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.240446   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:26.240452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:26.240513   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:26.273041   60176 cri.go:89] found id: ""
	I0725 18:53:26.273068   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.273078   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:26.273089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:26.273103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:26.327783   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:26.327815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:26.342925   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:26.342952   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:26.412563   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:26.412589   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:26.412605   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:26.493182   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:26.493222   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.030816   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:29.044047   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:29.044104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:29.077288   60176 cri.go:89] found id: ""
	I0725 18:53:29.077335   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.077354   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:29.077362   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:29.077429   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:29.113350   60176 cri.go:89] found id: ""
	I0725 18:53:29.113383   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.113395   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:29.113402   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:29.113472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:29.147123   60176 cri.go:89] found id: ""
	I0725 18:53:29.147151   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.147161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:29.147168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:29.147224   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:29.182248   60176 cri.go:89] found id: ""
	I0725 18:53:29.182279   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.182296   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:29.182304   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:29.182367   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:29.215750   60176 cri.go:89] found id: ""
	I0725 18:53:29.215777   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.215788   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:29.215795   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:29.215857   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:29.249001   60176 cri.go:89] found id: ""
	I0725 18:53:29.249027   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.249037   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:29.249044   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:29.249104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:29.281774   60176 cri.go:89] found id: ""
	I0725 18:53:29.281802   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.281812   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:29.281819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:29.281879   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:29.318703   60176 cri.go:89] found id: ""
	I0725 18:53:29.318728   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.318736   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:29.318744   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:29.318760   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:29.398145   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:29.398170   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:29.398184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:29.474090   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:29.474126   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.510143   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:29.510216   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:29.562952   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:29.562988   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:32.076743   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:32.090035   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:32.090108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:32.123139   60176 cri.go:89] found id: ""
	I0725 18:53:32.123173   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.123184   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:32.123191   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:32.123255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:32.156337   60176 cri.go:89] found id: ""
	I0725 18:53:32.156363   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.156372   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:32.156378   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:32.156437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:32.191566   60176 cri.go:89] found id: ""
	I0725 18:53:32.191597   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.191609   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:32.191617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:32.191684   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:32.225480   60176 cri.go:89] found id: ""
	I0725 18:53:32.225519   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.225528   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:32.225535   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:32.225593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:32.257129   60176 cri.go:89] found id: ""
	I0725 18:53:32.257160   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.257169   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:32.257175   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:32.257221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:32.298142   60176 cri.go:89] found id: ""
	I0725 18:53:32.298171   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.298180   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:32.298190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:32.298240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:32.331052   60176 cri.go:89] found id: ""
	I0725 18:53:32.331081   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.331092   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:32.331098   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:32.331143   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:32.364841   60176 cri.go:89] found id: ""
	I0725 18:53:32.364871   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.364882   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:32.364892   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:32.364907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:32.417931   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:32.417970   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:32.432131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:32.432159   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:32.499759   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:32.499784   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:32.499806   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:32.579140   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:32.579191   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:35.120647   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:35.133992   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:35.134084   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:35.172030   60176 cri.go:89] found id: ""
	I0725 18:53:35.172052   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.172061   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:35.172067   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:35.172123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:35.207893   60176 cri.go:89] found id: ""
	I0725 18:53:35.207920   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.207930   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:35.207937   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:35.207991   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:35.241626   60176 cri.go:89] found id: ""
	I0725 18:53:35.241651   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.241661   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:35.241668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:35.241732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:35.274017   60176 cri.go:89] found id: ""
	I0725 18:53:35.274047   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.274058   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:35.274064   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:35.274129   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:35.308778   60176 cri.go:89] found id: ""
	I0725 18:53:35.308809   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.308820   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:35.308827   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:35.308890   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:35.341366   60176 cri.go:89] found id: ""
	I0725 18:53:35.341392   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.341400   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:35.341406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:35.341461   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:35.373955   60176 cri.go:89] found id: ""
	I0725 18:53:35.373983   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.373994   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:35.374001   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:35.374058   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:35.404705   60176 cri.go:89] found id: ""
	I0725 18:53:35.404733   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.404743   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:35.404755   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:35.404794   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:35.455009   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:35.455043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:35.469113   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:35.469141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:35.533466   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:35.533497   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:35.533514   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:35.608513   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:35.608546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:38.147415   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:38.159974   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:38.160032   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:38.191108   60176 cri.go:89] found id: ""
	I0725 18:53:38.191138   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.191150   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:38.191157   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:38.191207   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:38.223494   60176 cri.go:89] found id: ""
	I0725 18:53:38.223519   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.223527   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:38.223533   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:38.223583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:38.254433   60176 cri.go:89] found id: ""
	I0725 18:53:38.254462   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.254473   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:38.254480   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:38.254546   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:38.286229   60176 cri.go:89] found id: ""
	I0725 18:53:38.286258   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.286268   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:38.286276   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:38.286339   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:38.323332   60176 cri.go:89] found id: ""
	I0725 18:53:38.323362   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.323371   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:38.323378   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:38.323441   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:38.356260   60176 cri.go:89] found id: ""
	I0725 18:53:38.356290   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.356301   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:38.356309   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:38.356383   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:38.388543   60176 cri.go:89] found id: ""
	I0725 18:53:38.388571   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.388582   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:38.388588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:38.388660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:38.424003   60176 cri.go:89] found id: ""
	I0725 18:53:38.424030   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.424040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:38.424051   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:38.424065   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:38.474963   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:38.474995   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:38.488392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:38.488425   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:38.561922   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:38.561946   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:38.562116   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:38.646569   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:38.646604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:41.190319   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:41.202314   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:41.202382   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:41.238344   60176 cri.go:89] found id: ""
	I0725 18:53:41.238370   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.238378   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:41.238383   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:41.238438   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:41.272219   60176 cri.go:89] found id: ""
	I0725 18:53:41.272252   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.272263   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:41.272271   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:41.272349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:41.307125   60176 cri.go:89] found id: ""
	I0725 18:53:41.307151   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.307161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:41.307168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:41.307230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:41.339277   60176 cri.go:89] found id: ""
	I0725 18:53:41.339307   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.339320   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:41.339328   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:41.339394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:41.373989   60176 cri.go:89] found id: ""
	I0725 18:53:41.374103   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.374126   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:41.374136   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:41.374205   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:41.404939   60176 cri.go:89] found id: ""
	I0725 18:53:41.404968   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.404979   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:41.404986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:41.405050   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:41.436889   60176 cri.go:89] found id: ""
	I0725 18:53:41.436919   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.436931   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:41.436940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:41.437009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:41.468457   60176 cri.go:89] found id: ""
	I0725 18:53:41.468486   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.468496   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:41.468506   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:41.468520   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:41.519499   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:41.519529   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:41.533653   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:41.533688   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:41.602134   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:41.602156   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:41.602171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:41.676181   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:41.676214   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.213932   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:44.226286   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:44.226352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:44.258782   60176 cri.go:89] found id: ""
	I0725 18:53:44.258817   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.258829   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:44.258835   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:44.258887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:44.308398   60176 cri.go:89] found id: ""
	I0725 18:53:44.308424   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.308432   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:44.308437   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:44.308499   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:44.339388   60176 cri.go:89] found id: ""
	I0725 18:53:44.339414   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.339424   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:44.339430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:44.339493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:44.369635   60176 cri.go:89] found id: ""
	I0725 18:53:44.369669   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.369679   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:44.369685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:44.369751   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:44.403834   60176 cri.go:89] found id: ""
	I0725 18:53:44.403859   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.403869   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:44.403876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:44.403939   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:44.439172   60176 cri.go:89] found id: ""
	I0725 18:53:44.439204   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.439215   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:44.439222   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:44.439287   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:44.474638   60176 cri.go:89] found id: ""
	I0725 18:53:44.474664   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.474674   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:44.474681   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:44.474743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:44.506205   60176 cri.go:89] found id: ""
	I0725 18:53:44.506226   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.506233   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:44.506241   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:44.506253   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:44.587955   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:44.587994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.626251   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:44.626276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:44.679008   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:44.679040   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:44.691749   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:44.691776   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:44.763419   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:47.263738   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:47.275907   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:47.275974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:47.313612   60176 cri.go:89] found id: ""
	I0725 18:53:47.313642   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.313651   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:47.313662   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:47.313727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:47.345186   60176 cri.go:89] found id: ""
	I0725 18:53:47.345215   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.345226   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:47.345233   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:47.345304   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:47.378074   60176 cri.go:89] found id: ""
	I0725 18:53:47.378103   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.378114   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:47.378128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:47.378188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:47.407147   60176 cri.go:89] found id: ""
	I0725 18:53:47.407176   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.407186   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:47.407193   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:47.407255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:47.437015   60176 cri.go:89] found id: ""
	I0725 18:53:47.437049   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.437061   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:47.437068   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:47.437153   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:47.469201   60176 cri.go:89] found id: ""
	I0725 18:53:47.469231   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.469241   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:47.469248   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:47.469331   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:47.501160   60176 cri.go:89] found id: ""
	I0725 18:53:47.501189   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.501199   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:47.501206   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:47.501264   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:47.535102   60176 cri.go:89] found id: ""
	I0725 18:53:47.535140   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.535149   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:47.535159   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:47.535184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:47.547568   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:47.547593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:47.616025   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:47.616048   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:47.616062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:47.690450   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:47.690482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:47.725553   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:47.725589   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.281640   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:50.295201   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:50.295272   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:50.331689   60176 cri.go:89] found id: ""
	I0725 18:53:50.331713   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.331721   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:50.331726   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:50.331770   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:50.362392   60176 cri.go:89] found id: ""
	I0725 18:53:50.362422   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.362434   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:50.362441   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:50.362505   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:50.393410   60176 cri.go:89] found id: ""
	I0725 18:53:50.393433   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.393441   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:50.393449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:50.393493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:50.425041   60176 cri.go:89] found id: ""
	I0725 18:53:50.425068   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.425079   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:50.425085   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:50.425144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:50.461533   60176 cri.go:89] found id: ""
	I0725 18:53:50.461556   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.461563   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:50.461568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:50.461614   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:50.494395   60176 cri.go:89] found id: ""
	I0725 18:53:50.494417   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.494425   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:50.494431   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:50.494485   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:50.528639   60176 cri.go:89] found id: ""
	I0725 18:53:50.528663   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.528672   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:50.528678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:50.528724   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:50.562007   60176 cri.go:89] found id: ""
	I0725 18:53:50.562032   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.562040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:50.562049   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:50.562062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.612107   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:50.612141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:50.624516   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:50.624540   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:50.724772   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:50.724799   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:50.724818   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:50.813891   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:50.813924   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:53.352629   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:53.366863   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:53.366941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:53.401238   60176 cri.go:89] found id: ""
	I0725 18:53:53.401266   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.401277   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:53.401284   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:53.401351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:53.434133   60176 cri.go:89] found id: ""
	I0725 18:53:53.434166   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.434178   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:53.434186   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:53.434248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:53.470135   60176 cri.go:89] found id: ""
	I0725 18:53:53.470157   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.470165   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:53.470170   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:53.470217   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:53.512591   60176 cri.go:89] found id: ""
	I0725 18:53:53.512613   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.512621   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:53.512626   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:53.512683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:53.544476   60176 cri.go:89] found id: ""
	I0725 18:53:53.544506   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.544517   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:53.544524   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:53.544591   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:53.577697   60176 cri.go:89] found id: ""
	I0725 18:53:53.577727   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.577746   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:53.577753   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:53.577816   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:53.610729   60176 cri.go:89] found id: ""
	I0725 18:53:53.610754   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.610761   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:53.610769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:53.610817   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:53.645127   60176 cri.go:89] found id: ""
	I0725 18:53:53.645154   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.645164   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:53.645174   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:53.645188   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:53.694575   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:53.694608   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:53.707931   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:53.707958   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:53.778423   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:53.778446   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:53.778460   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:53.860424   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:53.860458   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:56.400993   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:56.418757   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:56.418834   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:56.466300   60176 cri.go:89] found id: ""
	I0725 18:53:56.466330   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.466340   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:56.466348   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:56.466409   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:56.523080   60176 cri.go:89] found id: ""
	I0725 18:53:56.523107   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.523117   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:56.523124   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:56.523184   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:56.554854   60176 cri.go:89] found id: ""
	I0725 18:53:56.554881   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.554891   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:56.554898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:56.554953   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:56.588851   60176 cri.go:89] found id: ""
	I0725 18:53:56.588876   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.588885   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:56.588892   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:56.588958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:56.623818   60176 cri.go:89] found id: ""
	I0725 18:53:56.623840   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.623849   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:56.623854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:56.623902   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:56.658958   60176 cri.go:89] found id: ""
	I0725 18:53:56.658982   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.658990   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:56.658996   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:56.659044   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:56.694689   60176 cri.go:89] found id: ""
	I0725 18:53:56.694715   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.694724   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:56.694729   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:56.694780   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:56.728038   60176 cri.go:89] found id: ""
	I0725 18:53:56.728067   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.728077   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:56.728088   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:56.728103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:56.805628   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:56.805657   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:56.805672   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:56.886168   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:56.886210   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:56.923004   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:56.923043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:56.975693   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:56.975729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.491244   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:59.503301   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:59.503363   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:59.540674   60176 cri.go:89] found id: ""
	I0725 18:53:59.540699   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.540707   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:59.540712   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:59.540763   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:59.575145   60176 cri.go:89] found id: ""
	I0725 18:53:59.575182   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.575192   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:59.575199   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:59.575260   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:59.606952   60176 cri.go:89] found id: ""
	I0725 18:53:59.606978   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.606989   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:59.606995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:59.607056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:59.645110   60176 cri.go:89] found id: ""
	I0725 18:53:59.645136   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.645147   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:59.645155   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:59.645218   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:59.676479   60176 cri.go:89] found id: ""
	I0725 18:53:59.676499   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.676507   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:59.676512   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:59.676581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:59.707454   60176 cri.go:89] found id: ""
	I0725 18:53:59.707482   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.707493   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:59.707500   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:59.707575   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:59.740387   60176 cri.go:89] found id: ""
	I0725 18:53:59.740414   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.740421   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:59.740427   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:59.740474   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:59.774171   60176 cri.go:89] found id: ""
	I0725 18:53:59.774199   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.774207   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:59.774216   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:59.774231   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:59.825138   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:59.825171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.839715   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:59.839742   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:59.905645   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:59.905681   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:59.905699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:59.980909   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:59.980943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:02.524178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:02.538055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:02.538113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:02.576234   60176 cri.go:89] found id: ""
	I0725 18:54:02.576259   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.576268   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:02.576274   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:02.576340   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:02.607765   60176 cri.go:89] found id: ""
	I0725 18:54:02.607792   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.607803   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:02.607810   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:02.607865   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:02.640566   60176 cri.go:89] found id: ""
	I0725 18:54:02.640592   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.640601   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:02.640606   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:02.640655   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:02.673476   60176 cri.go:89] found id: ""
	I0725 18:54:02.673504   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.673512   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:02.673517   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:02.673565   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:02.706270   60176 cri.go:89] found id: ""
	I0725 18:54:02.706299   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.706309   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:02.706316   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:02.706376   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:02.737108   60176 cri.go:89] found id: ""
	I0725 18:54:02.737138   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.737146   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:02.737152   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:02.737200   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:02.775681   60176 cri.go:89] found id: ""
	I0725 18:54:02.775710   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.775719   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:02.775724   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:02.775773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:02.808116   60176 cri.go:89] found id: ""
	I0725 18:54:02.808151   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.808159   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:02.808169   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:02.808182   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:02.872505   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:02.872534   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:02.872557   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:02.948158   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:02.948193   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:02.982990   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:02.983020   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:03.031910   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:03.031943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:05.545994   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:05.559105   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.559174   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.594106   60176 cri.go:89] found id: ""
	I0725 18:54:05.594134   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.594144   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:05.594151   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.594232   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.630148   60176 cri.go:89] found id: ""
	I0725 18:54:05.630172   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.630179   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:05.630185   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.630242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.662968   60176 cri.go:89] found id: ""
	I0725 18:54:05.662993   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.663003   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:05.663010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.663059   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.696645   60176 cri.go:89] found id: ""
	I0725 18:54:05.696668   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.696676   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:05.696682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.696738   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:05.730027   60176 cri.go:89] found id: ""
	I0725 18:54:05.730050   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.730058   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:05.730063   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:05.730113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:05.760918   60176 cri.go:89] found id: ""
	I0725 18:54:05.760946   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.760956   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:05.760968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:05.761027   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:05.801025   60176 cri.go:89] found id: ""
	I0725 18:54:05.801057   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.801068   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:05.801075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:05.801142   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:05.834567   60176 cri.go:89] found id: ""
	I0725 18:54:05.834594   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.834605   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:05.834615   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:05.834630   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:05.903812   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:05.903840   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:05.903855   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:05.981642   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:05.981671   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.024246   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.024316   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.081777   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:06.081802   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:08.598790   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:08.611234   60176 kubeadm.go:597] duration metric: took 4m4.357436643s to restartPrimaryControlPlane
	W0725 18:54:08.611305   60176 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0725 18:54:08.611343   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:54:13.076782   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.465409333s)
	I0725 18:54:13.076872   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:13.091089   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:54:13.102042   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:54:13.111117   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:54:13.111134   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:54:13.111171   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:54:13.119629   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:54:13.119676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:54:13.128676   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:54:13.136705   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:54:13.136761   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:54:13.145959   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.154628   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:54:13.154676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.163164   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:54:13.171473   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:54:13.171552   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:54:13.179663   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:54:13.244923   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:54:13.245063   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:54:13.387687   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:54:13.387814   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:54:13.387941   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:54:13.566258   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:54:13.568262   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:54:13.568407   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:54:13.568493   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:54:13.568599   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:54:13.568677   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:54:13.568771   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:54:13.568844   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:54:13.569095   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:54:13.570081   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:54:13.570719   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:54:13.571213   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:54:13.571395   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:54:13.571482   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:54:13.900234   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:54:14.171283   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:54:14.317774   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:54:14.522412   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:54:14.537598   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:54:14.539553   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:54:14.539629   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:54:14.683755   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:54:14.685635   60176 out.go:204]   - Booting up control plane ...
	I0725 18:54:14.685764   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:54:14.697124   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:54:14.698087   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:54:14.698830   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:54:14.701051   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:54:54.702358   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:54:54.702929   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:54.703166   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:54:59.703734   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:59.704045   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:09.704361   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:09.704593   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:29.705545   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:29.705871   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.707936   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:56:09.708279   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.708303   60176 kubeadm.go:310] 
	I0725 18:56:09.708361   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:56:09.708425   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:56:09.708434   60176 kubeadm.go:310] 
	I0725 18:56:09.708495   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:56:09.708548   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:56:09.708721   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:56:09.708755   60176 kubeadm.go:310] 
	I0725 18:56:09.708910   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:56:09.708960   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:56:09.708997   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:56:09.709006   60176 kubeadm.go:310] 
	I0725 18:56:09.709130   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:56:09.709230   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:56:09.709239   60176 kubeadm.go:310] 
	I0725 18:56:09.709366   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:56:09.709499   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:56:09.709608   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:56:09.709715   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:56:09.709730   60176 kubeadm.go:310] 
	I0725 18:56:09.710446   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:56:09.710594   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:56:09.710699   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0725 18:56:09.710838   60176 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 18:56:09.710897   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:56:15.078699   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.367772874s)
	I0725 18:56:15.078772   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:56:15.093265   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:56:15.102513   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:56:15.102529   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:56:15.102570   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:56:15.111001   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:56:15.111059   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:56:15.119773   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:56:15.128109   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:56:15.128166   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:56:15.136753   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.145122   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:56:15.145179   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.153952   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:56:15.162067   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:56:15.162109   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:56:15.170779   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:56:15.382925   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:58:11.387751   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:58:11.387868   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0725 18:58:11.389848   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:58:11.389935   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:58:11.390076   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:58:11.390177   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:58:11.390289   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:58:11.390389   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:58:11.392281   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:58:11.392400   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:58:11.392487   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:58:11.392609   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:58:11.392698   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:58:11.392808   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:58:11.392893   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:58:11.392960   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:58:11.393054   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:58:11.393160   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:58:11.393260   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:58:11.393311   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:58:11.393362   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:58:11.393415   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:58:11.393470   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:58:11.393522   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:58:11.393573   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:58:11.393665   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:58:11.393760   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:58:11.393815   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:58:11.393888   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:58:11.395197   60176 out.go:204]   - Booting up control plane ...
	I0725 18:58:11.395292   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:58:11.395385   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:58:11.395454   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:58:11.395528   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:58:11.395674   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:58:11.395717   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:58:11.395793   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396019   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396116   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396334   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396408   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396572   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396638   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396799   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396865   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.397061   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.397069   60176 kubeadm.go:310] 
	I0725 18:58:11.397102   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:58:11.397136   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:58:11.397141   60176 kubeadm.go:310] 
	I0725 18:58:11.397169   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:58:11.397212   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:58:11.397314   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:58:11.397338   60176 kubeadm.go:310] 
	I0725 18:58:11.397462   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:58:11.397504   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:58:11.397554   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:58:11.397566   60176 kubeadm.go:310] 
	I0725 18:58:11.397657   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:58:11.397730   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:58:11.397737   60176 kubeadm.go:310] 
	I0725 18:58:11.397832   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:58:11.397928   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:58:11.398009   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:58:11.398088   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:58:11.398144   60176 kubeadm.go:310] 
	I0725 18:58:11.398184   60176 kubeadm.go:394] duration metric: took 8m7.195831536s to StartCluster
	I0725 18:58:11.398237   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:58:11.398431   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:58:11.438474   60176 cri.go:89] found id: ""
	I0725 18:58:11.438497   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.438504   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:58:11.438509   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:58:11.438560   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:58:11.470965   60176 cri.go:89] found id: ""
	I0725 18:58:11.471000   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.471013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:58:11.471021   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:58:11.471086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:58:11.503353   60176 cri.go:89] found id: ""
	I0725 18:58:11.503387   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.503402   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:58:11.503409   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:58:11.503468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:58:11.535307   60176 cri.go:89] found id: ""
	I0725 18:58:11.535340   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.535350   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:58:11.535359   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:58:11.535425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:58:11.568071   60176 cri.go:89] found id: ""
	I0725 18:58:11.568094   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.568104   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:58:11.568118   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:58:11.568183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:58:11.600126   60176 cri.go:89] found id: ""
	I0725 18:58:11.600154   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.600165   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:58:11.600172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:58:11.600234   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:58:11.632609   60176 cri.go:89] found id: ""
	I0725 18:58:11.632635   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.632642   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:58:11.632648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:58:11.632706   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:58:11.666352   60176 cri.go:89] found id: ""
	I0725 18:58:11.666376   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.666384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:58:11.666392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:58:11.666409   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:58:11.766887   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:58:11.766912   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:58:11.766930   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:58:11.885565   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:58:11.885601   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:58:11.927611   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:58:11.927637   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:58:11.978011   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:58:11.978046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0725 18:58:11.991296   60176 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 18:58:11.991350   60176 out.go:239] * 
	* 
	W0725 18:58:11.991412   60176 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.991433   60176 out.go:239] * 
	* 
	W0725 18:58:11.992535   60176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:58:11.996223   60176 out.go:177] 
	W0725 18:58:11.997418   60176 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.997464   60176 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 18:58:11.997495   60176 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 18:58:11.998869   60176 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-108542 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
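A minimal sketch of the remediation hinted at in the log above. The profile name and flags are copied from the failed invocation; adding the kubelet cgroup-driver override is the log's own suggestion, and whether it actually resolves this particular run is an assumption, not something the report confirms:

	# Inspect the kubelet journal on the node, as the failure output suggests
	out/minikube-linux-amd64 -p old-k8s-version-108542 ssh "sudo journalctl -xeu kubelet"
	# Retry the start with the suggested systemd cgroup driver for the kubelet
	out/minikube-linux-amd64 start -p old-k8s-version-108542 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd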
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 2 (227.862315ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-108542 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-108542 logs -n 25: (1.556261742s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC |                     |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979261                              | cert-expiration-979261       | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:42 UTC |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-819413             | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-819413                  | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-108542        | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| image   | newest-cni-819413 image list                           | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-371663                  | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-371663 --memory=2200                     | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-600433       | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:54 UTC |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-646344            | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-108542             | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-646344                 | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC | 25 Jul 24 18:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:47:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:47:51.335413   60732 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:47:51.335822   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.335880   60732 out.go:304] Setting ErrFile to fd 2...
	I0725 18:47:51.335900   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.336419   60732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:47:51.337339   60732 out.go:298] Setting JSON to false
	I0725 18:47:51.338209   60732 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5415,"bootTime":1721927856,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:47:51.338264   60732 start.go:139] virtualization: kvm guest
	I0725 18:47:51.340134   60732 out.go:177] * [embed-certs-646344] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:47:51.341750   60732 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:47:51.341752   60732 notify.go:220] Checking for updates...
	I0725 18:47:51.344351   60732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:47:51.345770   60732 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:47:51.346912   60732 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:47:51.348038   60732 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:47:51.349161   60732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:47:51.350578   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:47:51.350953   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.350991   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.365561   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0725 18:47:51.365978   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.366490   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.366509   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.366823   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.366999   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.367234   60732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:47:51.367497   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.367527   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.381639   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0725 18:47:51.381960   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.382381   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.382402   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.382685   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.382870   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.415199   60732 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:47:51.416470   60732 start.go:297] selected driver: kvm2
	I0725 18:47:51.416488   60732 start.go:901] validating driver "kvm2" against &{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.416607   60732 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:47:51.417317   60732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.417405   60732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:47:51.431942   60732 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:47:51.432284   60732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:47:51.432371   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:47:51.432386   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:47:51.432434   60732 start.go:340] cluster config:
	{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.432535   60732 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.435012   60732 out.go:177] * Starting "embed-certs-646344" primary control-plane node in "embed-certs-646344" cluster
	I0725 18:47:53.472602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:47:51.436106   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:47:51.436136   60732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 18:47:51.436143   60732 cache.go:56] Caching tarball of preloaded images
	I0725 18:47:51.436215   60732 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:47:51.436238   60732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 18:47:51.436365   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:47:51.436560   60732 start.go:360] acquireMachinesLock for embed-certs-646344: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:47:59.552616   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:02.624594   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:08.704607   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:11.776581   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:17.856602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:20.928547   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:27.008590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:30.084604   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:36.160617   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:39.232633   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:45.312630   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:48.384662   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:54.464559   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:57.536621   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:03.616552   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:06.688590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.773620   59645 start.go:364] duration metric: took 4m26.592394108s to acquireMachinesLock for "default-k8s-diff-port-600433"
	I0725 18:49:15.773683   59645 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:15.773694   59645 fix.go:54] fixHost starting: 
	I0725 18:49:15.774019   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:15.774051   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:15.789240   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0725 18:49:15.789740   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:15.790212   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:15.790233   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:15.790591   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:15.790845   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:15.791014   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:15.793113   59645 fix.go:112] recreateIfNeeded on default-k8s-diff-port-600433: state=Stopped err=<nil>
	I0725 18:49:15.793149   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	W0725 18:49:15.793313   59645 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:15.795191   59645 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-600433" ...
	I0725 18:49:12.768538   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.771150   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:15.771186   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771533   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:49:15.771558   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771774   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:49:15.773458   59378 machine.go:97] duration metric: took 4m37.565633658s to provisionDockerMachine
	I0725 18:49:15.773505   59378 fix.go:56] duration metric: took 4m37.588536865s for fixHost
	I0725 18:49:15.773515   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 4m37.588577134s
	W0725 18:49:15.773539   59378 start.go:714] error starting host: provision: host is not running
	W0725 18:49:15.773622   59378 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0725 18:49:15.773634   59378 start.go:729] Will try again in 5 seconds ...
	I0725 18:49:15.796482   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Start
	I0725 18:49:15.796686   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring networks are active...
	I0725 18:49:15.797399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network default is active
	I0725 18:49:15.797752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network mk-default-k8s-diff-port-600433 is active
	I0725 18:49:15.798080   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Getting domain xml...
	I0725 18:49:15.798673   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Creating domain...
	I0725 18:49:17.018432   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting to get IP...
	I0725 18:49:17.019400   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.019970   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.020072   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.019959   61066 retry.go:31] will retry after 308.610139ms: waiting for machine to come up
	I0725 18:49:17.330698   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331224   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.331162   61066 retry.go:31] will retry after 334.762083ms: waiting for machine to come up
	I0725 18:49:17.667824   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668211   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668241   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.668158   61066 retry.go:31] will retry after 474.612313ms: waiting for machine to come up
	I0725 18:49:18.145035   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145575   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.145498   61066 retry.go:31] will retry after 493.878098ms: waiting for machine to come up
	I0725 18:49:18.641257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641839   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.641705   61066 retry.go:31] will retry after 747.653142ms: waiting for machine to come up
	I0725 18:49:20.776022   59378 start.go:360] acquireMachinesLock for no-preload-371663: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:49:19.390788   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391296   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:19.391237   61066 retry.go:31] will retry after 790.014184ms: waiting for machine to come up
	I0725 18:49:20.183244   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183733   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183756   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:20.183676   61066 retry.go:31] will retry after 932.227483ms: waiting for machine to come up
	I0725 18:49:21.117548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.117989   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.118019   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:21.117947   61066 retry.go:31] will retry after 1.421954156s: waiting for machine to come up
	I0725 18:49:22.541650   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542032   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542059   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:22.541972   61066 retry.go:31] will retry after 1.281624824s: waiting for machine to come up
	I0725 18:49:23.825380   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825721   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825738   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:23.825700   61066 retry.go:31] will retry after 1.470467032s: waiting for machine to come up
	I0725 18:49:25.298488   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.298993   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.299016   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:25.298958   61066 retry.go:31] will retry after 2.857621922s: waiting for machine to come up
	I0725 18:49:28.157929   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158361   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158387   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:28.158322   61066 retry.go:31] will retry after 2.354044303s: waiting for machine to come up
	I0725 18:49:30.514911   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515408   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515440   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:30.515361   61066 retry.go:31] will retry after 4.26590841s: waiting for machine to come up
	I0725 18:49:36.036943   60176 start.go:364] duration metric: took 3m49.551567331s to acquireMachinesLock for "old-k8s-version-108542"
	I0725 18:49:36.037007   60176 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:36.037018   60176 fix.go:54] fixHost starting: 
	I0725 18:49:36.037477   60176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:36.037517   60176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:36.055190   60176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0725 18:49:36.055631   60176 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:36.056086   60176 main.go:141] libmachine: Using API Version  1
	I0725 18:49:36.056105   60176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:36.056466   60176 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:36.056685   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:36.056862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetState
	I0725 18:49:36.058311   60176 fix.go:112] recreateIfNeeded on old-k8s-version-108542: state=Stopped err=<nil>
	I0725 18:49:36.058348   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	W0725 18:49:36.058530   60176 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:36.060822   60176 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-108542" ...
	I0725 18:49:36.062077   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .Start
	I0725 18:49:36.062241   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring networks are active...
	I0725 18:49:36.062926   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network default is active
	I0725 18:49:36.063329   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network mk-old-k8s-version-108542 is active
	I0725 18:49:36.063698   60176 main.go:141] libmachine: (old-k8s-version-108542) Getting domain xml...
	I0725 18:49:36.064367   60176 main.go:141] libmachine: (old-k8s-version-108542) Creating domain...
	I0725 18:49:34.786308   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786801   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Found IP for machine: 192.168.50.221
	I0725 18:49:34.786836   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has current primary IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserving static IP address...
	I0725 18:49:34.787187   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.787223   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | skip adding static IP to network mk-default-k8s-diff-port-600433 - found existing host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"}
	I0725 18:49:34.787237   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserved static IP address: 192.168.50.221
	I0725 18:49:34.787251   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Getting to WaitForSSH function...
	I0725 18:49:34.787261   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for SSH to be available...
	I0725 18:49:34.789202   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789467   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.789494   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789582   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH client type: external
	I0725 18:49:34.789608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa (-rw-------)
	I0725 18:49:34.789642   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:34.789656   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | About to run SSH command:
	I0725 18:49:34.789672   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | exit 0
	I0725 18:49:34.916303   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | SSH cmd err, output: <nil>: 
	I0725 18:49:34.916741   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetConfigRaw
	I0725 18:49:34.917309   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:34.919931   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920356   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.920388   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920711   59645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/config.json ...
	I0725 18:49:34.920952   59645 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:34.920973   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:34.921158   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:34.923280   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923663   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.923699   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923782   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:34.923953   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924116   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924367   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:34.924559   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:34.924778   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:34.924789   59645 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:35.036568   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:35.036605   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.036862   59645 buildroot.go:166] provisioning hostname "default-k8s-diff-port-600433"
	I0725 18:49:35.036890   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.037089   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.039523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.039891   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.039928   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.040048   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.040240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040409   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040540   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.040696   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.040855   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.040867   59645 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-600433 && echo "default-k8s-diff-port-600433" | sudo tee /etc/hostname
	I0725 18:49:35.170553   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-600433
	
	I0725 18:49:35.170606   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.173260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173590   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.173615   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173811   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.174057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.174606   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.174762   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.174798   59645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-600433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-600433/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-600433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:35.292349   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:35.292387   59645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:35.292425   59645 buildroot.go:174] setting up certificates
	I0725 18:49:35.292443   59645 provision.go:84] configureAuth start
	I0725 18:49:35.292456   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.292749   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:35.295317   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295628   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.295657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295817   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.297815   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298114   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.298146   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298330   59645 provision.go:143] copyHostCerts
	I0725 18:49:35.298373   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:35.298384   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:35.298461   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:35.298578   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:35.298590   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:35.298631   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:35.298725   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:35.298735   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:35.298767   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:35.298846   59645 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-600433 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-600433 localhost minikube]
	I0725 18:49:35.385077   59645 provision.go:177] copyRemoteCerts
	I0725 18:49:35.385142   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:35.385168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.387858   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388165   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.388195   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.388604   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.388760   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.388903   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.473920   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:35.496193   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0725 18:49:35.517673   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:35.538593   59645 provision.go:87] duration metric: took 246.139455ms to configureAuth
	I0725 18:49:35.538617   59645 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:35.538796   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:35.538860   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.541598   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542144   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.542168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542369   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.542548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542664   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542812   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.542937   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.543138   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.543167   59645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:35.799471   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:35.799495   59645 machine.go:97] duration metric: took 878.530074ms to provisionDockerMachine
	I0725 18:49:35.799509   59645 start.go:293] postStartSetup for "default-k8s-diff-port-600433" (driver="kvm2")
	I0725 18:49:35.799526   59645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:35.799569   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:35.799861   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:35.799916   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.802372   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.802776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802882   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.803057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.803200   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.803304   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.886188   59645 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:35.890053   59645 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:35.890090   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:35.890157   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:35.890227   59645 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:35.890317   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:35.899121   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:35.921904   59645 start.go:296] duration metric: took 122.381588ms for postStartSetup
	I0725 18:49:35.921942   59645 fix.go:56] duration metric: took 20.148249245s for fixHost
	I0725 18:49:35.921960   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.924865   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925265   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.925300   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925414   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.925608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925876   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.926011   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.926191   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.926205   59645 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 18:49:36.036748   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933376.013042854
	
	I0725 18:49:36.036779   59645 fix.go:216] guest clock: 1721933376.013042854
	I0725 18:49:36.036790   59645 fix.go:229] Guest: 2024-07-25 18:49:36.013042854 +0000 UTC Remote: 2024-07-25 18:49:35.921945116 +0000 UTC m=+286.890099623 (delta=91.097738ms)
	I0725 18:49:36.036855   59645 fix.go:200] guest clock delta is within tolerance: 91.097738ms
	I0725 18:49:36.036863   59645 start.go:83] releasing machines lock for "default-k8s-diff-port-600433", held for 20.263198657s
	I0725 18:49:36.036905   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.037178   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:36.040216   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040692   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.040717   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040881   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041501   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041596   59645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:36.041657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.041693   59645 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:36.041718   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.044433   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.044775   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044799   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045030   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045191   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.045209   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045217   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045375   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045476   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045501   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.045648   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045828   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045988   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.158410   59645 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:36.164254   59645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:36.305911   59645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:36.312544   59645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:36.312642   59645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:36.327394   59645 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:36.327420   59645 start.go:495] detecting cgroup driver to use...
	I0725 18:49:36.327497   59645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:36.342695   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:36.355528   59645 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:36.355593   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:36.369191   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:36.382786   59645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:36.498465   59645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:36.635188   59645 docker.go:233] disabling docker service ...
	I0725 18:49:36.635272   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:36.655356   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:36.671402   59645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:36.819969   59645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:36.961130   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:36.976459   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:36.995542   59645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:49:36.995607   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.006967   59645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:37.007041   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.017503   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.027807   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.037804   59645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:37.047817   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.057895   59645 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.075586   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.085987   59645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:37.095527   59645 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:37.095593   59645 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:37.107540   59645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:37.117409   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:37.246455   59645 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:49:37.383873   59645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:37.383959   59645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:37.388630   59645 start.go:563] Will wait 60s for crictl version
	I0725 18:49:37.388687   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:49:37.393190   59645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:37.439603   59645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:37.439688   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.468723   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.501339   59645 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:49:37.502895   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:37.505728   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506098   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:37.506128   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506341   59645 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:37.510432   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:37.523446   59645 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:37.523608   59645 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:49:37.523691   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:37.561149   59645 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:49:37.561209   59645 ssh_runner.go:195] Run: which lz4
	I0725 18:49:37.565614   59645 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 18:49:37.569702   59645 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:37.569738   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:49:38.884355   59645 crio.go:462] duration metric: took 1.318757754s to copy over tarball
	I0725 18:49:38.884481   59645 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:49:37.310225   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting to get IP...
	I0725 18:49:37.311059   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.311480   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.311557   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.311444   61209 retry.go:31] will retry after 249.654633ms: waiting for machine to come up
	I0725 18:49:37.563210   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.563727   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.563774   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.563696   61209 retry.go:31] will retry after 360.974896ms: waiting for machine to come up
	I0725 18:49:37.926464   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.927033   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.927104   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.926935   61209 retry.go:31] will retry after 392.213819ms: waiting for machine to come up
	I0725 18:49:38.320659   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.321153   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.321182   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.321107   61209 retry.go:31] will retry after 443.035852ms: waiting for machine to come up
	I0725 18:49:38.765569   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.765972   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.765996   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.765944   61209 retry.go:31] will retry after 691.876502ms: waiting for machine to come up
	I0725 18:49:39.459944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:39.460308   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:39.460354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:39.460259   61209 retry.go:31] will retry after 870.093433ms: waiting for machine to come up
	I0725 18:49:40.331944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:40.332382   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:40.332411   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:40.332301   61209 retry.go:31] will retry after 875.3931ms: waiting for machine to come up
	I0725 18:49:41.209789   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:41.210251   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:41.210275   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:41.210196   61209 retry.go:31] will retry after 1.355093494s: waiting for machine to come up
	I0725 18:49:41.126101   59645 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241583376s)
	I0725 18:49:41.126141   59645 crio.go:469] duration metric: took 2.24174402s to extract the tarball
	I0725 18:49:41.126152   59645 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:49:41.163655   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:41.204248   59645 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:49:41.204270   59645 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:49:41.204278   59645 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0725 18:49:41.204442   59645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-600433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:49:41.204506   59645 ssh_runner.go:195] Run: crio config
	I0725 18:49:41.248210   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:41.248239   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:41.248255   59645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:49:41.248286   59645 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-600433 NodeName:default-k8s-diff-port-600433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:49:41.248491   59645 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-600433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:49:41.248591   59645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:49:41.257987   59645 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:49:41.258057   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:49:41.267141   59645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0725 18:49:41.283078   59645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:49:41.299009   59645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0725 18:49:41.315642   59645 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0725 18:49:41.319267   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:41.330435   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:41.453042   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:49:41.471864   59645 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433 for IP: 192.168.50.221
	I0725 18:49:41.471896   59645 certs.go:194] generating shared ca certs ...
	I0725 18:49:41.471915   59645 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:41.472098   59645 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:49:41.472151   59645 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:49:41.472163   59645 certs.go:256] generating profile certs ...
	I0725 18:49:41.472271   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.key
	I0725 18:49:41.472399   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key.28cfcfe9
	I0725 18:49:41.472470   59645 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key
	I0725 18:49:41.472630   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:49:41.472681   59645 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:49:41.472696   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:49:41.472734   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:49:41.472768   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:49:41.472801   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:49:41.472875   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:41.473783   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:49:41.519536   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:49:41.570915   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:49:41.596050   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:49:41.622290   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 18:49:41.644771   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:49:41.673056   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:49:41.698215   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:49:41.720502   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:49:41.742897   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:49:41.765403   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:49:41.788097   59645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:49:41.804016   59645 ssh_runner.go:195] Run: openssl version
	I0725 18:49:41.809451   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:49:41.819312   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823677   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823731   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.829342   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:49:41.839245   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:49:41.848902   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852894   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852948   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.858231   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:49:41.868414   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:49:41.878478   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882534   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882596   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.888100   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
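The symlink setup above mirrors what OpenSSL's c_rehash does: each CA certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash (for example minikubeCA.pem -> b5213941.0). A minimal Go sketch of the same idea, shelling out to openssl for the hash; the helper name and paths are illustrative, not minikube's actual implementation:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, the same layout the log shows.
// Illustrative only; requires the openssl binary and write access to /etc/ssl/certs.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```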
	I0725 18:49:41.897994   59645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:49:41.902066   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:49:41.907593   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:49:41.913339   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:49:41.918977   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:49:41.924846   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:49:41.931208   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
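The `-checkend 86400` calls above verify that none of the control-plane certificates expire within the next 24 hours. The same check can be done without shelling out by parsing the certificate with crypto/x509; a minimal sketch, with the file path taken from the log as an example:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the pure-Go equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```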
	I0725 18:49:41.936979   59645 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:49:41.937105   59645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:49:41.937165   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:41.973862   59645 cri.go:89] found id: ""
	I0725 18:49:41.973954   59645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:49:41.986980   59645 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:49:41.987006   59645 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:49:41.987059   59645 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:49:41.996155   59645 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:49:41.997176   59645 kubeconfig.go:125] found "default-k8s-diff-port-600433" server: "https://192.168.50.221:8444"
	I0725 18:49:41.999255   59645 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:49:42.007863   59645 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0725 18:49:42.007898   59645 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:49:42.007910   59645 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:49:42.007950   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:42.041234   59645 cri.go:89] found id: ""
	I0725 18:49:42.041344   59645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:49:42.057752   59645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:49:42.067347   59645 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:49:42.067367   59645 kubeadm.go:157] found existing configuration files:
	
	I0725 18:49:42.067414   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0725 18:49:42.075815   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:49:42.075862   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:49:42.084352   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0725 18:49:42.092738   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:49:42.092795   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:49:42.101917   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.110104   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:49:42.110171   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.118781   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0725 18:49:42.127369   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:49:42.127417   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:49:42.136433   59645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:49:42.145402   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.256466   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.967465   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.180271   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.238156   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
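Because the kubeconfig and manifest files were missing, the restart path regenerates everything by running the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`. A hedged Go sketch of that sequence; the binary and config paths come from the log, but the wrapper itself is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phases in the order the log runs them during restartPrimaryControlPlane.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
```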
	I0725 18:49:43.333954   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:49:43.334063   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:43.834381   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:42.566588   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:42.567061   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:42.567089   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:42.567010   61209 retry.go:31] will retry after 1.670701174s: waiting for machine to come up
	I0725 18:49:44.238961   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:44.239359   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:44.239377   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:44.239329   61209 retry.go:31] will retry after 2.028917586s: waiting for machine to come up
	I0725 18:49:46.270213   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:46.270674   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:46.270695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:46.270630   61209 retry.go:31] will retry after 2.760614678s: waiting for machine to come up
	I0725 18:49:44.335103   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:44.835115   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.334875   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.834915   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.849684   59645 api_server.go:72] duration metric: took 2.515729384s to wait for apiserver process to appear ...
	I0725 18:49:45.849717   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:49:45.849752   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.417830   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:49:48.417861   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:49:48.417898   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.496770   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.496823   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:48.850275   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.854417   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.854446   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.350652   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.356554   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:49.356585   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.849872   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.855690   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:49:49.863742   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:49:49.863770   59645 api_server.go:131] duration metric: took 4.014045168s to wait for apiserver health ...
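The health wait above simply polls /healthz over HTTPS until it returns 200; the early 403 (anonymous user) and 500 (post-start hooks still running) responses are treated as "not ready yet". A minimal poller along those lines, with certificate verification disabled since the probe only cares about the status code (an illustrative sketch, not minikube's api_server.go):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.221:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready yet, status:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	fmt.Println("timed out waiting for /healthz")
}
```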
	I0725 18:49:49.863780   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:49.863788   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:49.865438   59645 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:49:49.034670   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:49.035109   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:49.035136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:49.035073   61209 retry.go:31] will retry after 2.928049351s: waiting for machine to come up
	I0725 18:49:49.866747   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:49:49.877963   59645 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
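The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration the "Configuring bridge CNI" step refers to. Its exact contents are not shown in the log; the snippet below writes a typical bridge + host-local conflist purely to illustrate the shape of such a file (the CIDR, names, and plugin set are assumptions, not the file minikube generated):

```go
package main

import "os"

// A representative bridge CNI conflist. Values are examples only; the real
// 1-k8s.conflist generated by minikube may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```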
	I0725 18:49:49.898915   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:49:49.916996   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:49:49.917037   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:49:49.917049   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:49:49.917067   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:49:49.917080   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:49:49.917093   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:49:49.917105   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:49:49.917112   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:49:49.917120   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:49:49.917127   59645 system_pods.go:74] duration metric: took 18.191827ms to wait for pod list to return data ...
	I0725 18:49:49.917145   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:49:49.921009   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:49:49.921032   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:49:49.921046   59645 node_conditions.go:105] duration metric: took 3.893327ms to run NodePressure ...
	I0725 18:49:49.921064   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:50.188485   59645 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192676   59645 kubeadm.go:739] kubelet initialised
	I0725 18:49:50.192696   59645 kubeadm.go:740] duration metric: took 4.188813ms waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192710   59645 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:50.197736   59645 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.203856   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203881   59645 pod_ready.go:81] duration metric: took 6.126055ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.203891   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203897   59645 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.209211   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209233   59645 pod_ready.go:81] duration metric: took 5.32855ms for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.209242   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209248   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.216079   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216104   59645 pod_ready.go:81] duration metric: took 6.848427ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.216115   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216122   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.301694   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301718   59645 pod_ready.go:81] duration metric: took 85.5884ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.301728   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301735   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.702363   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702392   59645 pod_ready.go:81] duration metric: took 400.649914ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.702400   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702406   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.102906   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102943   59645 pod_ready.go:81] duration metric: took 400.527709ms for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.102955   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102964   59645 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.502187   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502217   59645 pod_ready.go:81] duration metric: took 399.245254ms for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.502228   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502235   59645 pod_ready.go:38] duration metric: took 1.309515361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
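The pod_ready loop above checks each system-critical pod's Ready condition and skips ahead while the node itself is still NotReady. A small client-go sketch of the same per-pod check; the pod name and kubeconfig path are examples taken from the log, and the polling parameters are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19326-5877/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const pod = "coredns-7db6d8ff4d-mfjzs" // one of the pods the log waits on
	for i := 0; i < 120; i++ {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), pod, metav1.GetOptions{})
		if err == nil {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println(pod, "is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for", pod)
}
```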
	I0725 18:49:51.502249   59645 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:49:51.513796   59645 ops.go:34] apiserver oom_adj: -16
	I0725 18:49:51.513816   59645 kubeadm.go:597] duration metric: took 9.526804087s to restartPrimaryControlPlane
	I0725 18:49:51.513823   59645 kubeadm.go:394] duration metric: took 9.576855212s to StartCluster
	I0725 18:49:51.513842   59645 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.513969   59645 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:49:51.515531   59645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.515761   59645 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:49:51.515843   59645 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:49:51.515951   59645 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515975   59645 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515983   59645 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.515995   59645 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:49:51.516017   59645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-600433"
	I0725 18:49:51.516024   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516025   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:51.516022   59645 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.516103   59645 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.516123   59645 addons.go:243] addon metrics-server should already be in state true
	I0725 18:49:51.516202   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516314   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516361   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516365   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516386   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516636   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516713   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.517682   59645 out.go:177] * Verifying Kubernetes components...
	I0725 18:49:51.519072   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:51.530909   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35239
	I0725 18:49:51.531207   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0725 18:49:51.531391   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531704   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531952   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.531978   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532148   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.532169   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532291   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.532474   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.532501   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.533028   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.533069   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.534984   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0725 18:49:51.535323   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.535729   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.535749   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.536027   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.536055   59645 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.536077   59645 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:49:51.536103   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.536463   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536491   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.536518   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536562   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.548458   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I0725 18:49:51.548987   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.549539   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.549563   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.549880   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.550016   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0725 18:49:51.550105   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.550366   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.550862   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.550897   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0725 18:49:51.550975   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551220   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.551462   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.551708   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.551727   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.551768   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.552170   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.552745   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.552787   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.553221   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.554936   59645 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:49:51.556152   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:49:51.556166   59645 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:49:51.556184   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.556202   59645 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:49:51.557826   59645 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.557870   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:49:51.557892   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.558763   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559109   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.559126   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559255   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.559402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.559522   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.559637   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.560776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561142   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.561169   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561285   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.561462   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.561624   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.561769   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.572412   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I0725 18:49:51.572773   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.573256   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.573269   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.573596   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.573793   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.575260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.575503   59645 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.575513   59645 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:49:51.575523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.577887   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578208   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.578228   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578339   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.578496   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.578651   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.578775   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.710511   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:49:51.728187   59645 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:51.810767   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:49:51.810801   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:49:51.822774   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.828890   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.841308   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:49:51.841332   59645 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:49:51.864965   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:49:51.864991   59645 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:49:51.910359   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:49:52.699480   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699512   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699488   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699573   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699812   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699829   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699839   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699893   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.699926   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699940   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699956   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699968   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.700056   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700086   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700202   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700218   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700248   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.704859   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.704873   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.705126   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.705144   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.794977   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795000   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795318   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795339   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795341   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.795346   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795360   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795632   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795657   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795668   59645 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-600433"
	I0725 18:49:52.797643   59645 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:49:52.798886   59645 addons.go:510] duration metric: took 1.283046902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0725 18:49:53.731631   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.964707   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:51.965228   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:51.965263   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:51.965151   61209 retry.go:31] will retry after 3.053047755s: waiting for machine to come up
	I0725 18:49:55.022350   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022815   60176 main.go:141] libmachine: (old-k8s-version-108542) Found IP for machine: 192.168.39.29
	I0725 18:49:55.022846   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has current primary IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022858   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserving static IP address...
	I0725 18:49:55.023277   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserved static IP address: 192.168.39.29
	I0725 18:49:55.023333   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.023342   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting for SSH to be available...
	I0725 18:49:55.023394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | skip adding static IP to network mk-old-k8s-version-108542 - found existing host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"}
	I0725 18:49:55.023425   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Getting to WaitForSSH function...
	I0725 18:49:55.025250   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025544   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.025574   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025668   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH client type: external
	I0725 18:49:55.025699   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa (-rw-------)
	I0725 18:49:55.025731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:55.025753   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | About to run SSH command:
	I0725 18:49:55.025770   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | exit 0
	I0725 18:49:55.152309   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | SSH cmd err, output: <nil>: 
	I0725 18:49:55.152720   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:49:55.153338   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.155460   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.155755   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155969   60176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json ...
	I0725 18:49:55.156128   60176 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:55.156143   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:55.156307   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.158465   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.158795   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.158827   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.159012   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.159174   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159366   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159512   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.159688   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.159902   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.159914   60176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:55.268422   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:55.268446   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268707   60176 buildroot.go:166] provisioning hostname "old-k8s-version-108542"
	I0725 18:49:55.268732   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268931   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.271599   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.271913   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.271949   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.272120   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.272285   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272490   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272657   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.272830   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.273003   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.273017   60176 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-108542 && echo "old-k8s-version-108542" | sudo tee /etc/hostname
	I0725 18:49:55.398261   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-108542
	
	I0725 18:49:55.398291   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.401090   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.401517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401669   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.401870   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402026   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402182   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.402380   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.402621   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.402648   60176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-108542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-108542/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-108542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:55.523079   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:55.523115   60176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:55.523147   60176 buildroot.go:174] setting up certificates
	I0725 18:49:55.523156   60176 provision.go:84] configureAuth start
	I0725 18:49:55.523165   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.523486   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.526235   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526644   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.526675   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526875   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.529466   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.529836   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.529865   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.530004   60176 provision.go:143] copyHostCerts
	I0725 18:49:55.530058   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:55.530068   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:55.530113   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:55.530198   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:55.530205   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:55.530225   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:55.530386   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:55.530401   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:55.530426   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:55.530494   60176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-108542 san=[127.0.0.1 192.168.39.29 localhost minikube old-k8s-version-108542]
	I0725 18:49:55.740503   60176 provision.go:177] copyRemoteCerts
	I0725 18:49:55.740561   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:55.740585   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.743257   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743582   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.743615   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743798   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.743997   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.744160   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.744312   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:55.825771   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:55.847516   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0725 18:49:55.869368   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:55.893223   60176 provision.go:87] duration metric: took 370.054854ms to configureAuth
	I0725 18:49:55.893255   60176 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:55.893425   60176 config.go:182] Loaded profile config "old-k8s-version-108542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:49:55.893500   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.896394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896703   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.896758   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896962   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.897161   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897431   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897631   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.897855   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.898023   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.898036   60176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:56.181257   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:56.181300   60176 machine.go:97] duration metric: took 1.025159397s to provisionDockerMachine
	I0725 18:49:56.181315   60176 start.go:293] postStartSetup for "old-k8s-version-108542" (driver="kvm2")
	I0725 18:49:56.181334   60176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:56.181353   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.181666   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:56.181688   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.184354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.184718   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184851   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.185034   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.185185   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.185308   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.266683   60176 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:56.270387   60176 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:56.270407   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:56.270474   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:56.270559   60176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:56.270668   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:56.279276   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:56.302444   60176 start.go:296] duration metric: took 121.115308ms for postStartSetup
	I0725 18:49:56.302497   60176 fix.go:56] duration metric: took 20.26546429s for fixHost
	I0725 18:49:56.302517   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.305136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.305517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305706   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.305922   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306074   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306193   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.306317   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:56.306502   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:56.306514   60176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:49:56.412717   60732 start.go:364] duration metric: took 2m4.976127328s to acquireMachinesLock for "embed-certs-646344"
	I0725 18:49:56.412771   60732 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:56.412782   60732 fix.go:54] fixHost starting: 
	I0725 18:49:56.413158   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:56.413188   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:56.432299   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0725 18:49:56.432712   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:56.433231   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:49:56.433260   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:56.433647   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:56.433868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:49:56.434040   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:49:56.435582   60732 fix.go:112] recreateIfNeeded on embed-certs-646344: state=Stopped err=<nil>
	I0725 18:49:56.435617   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	W0725 18:49:56.435793   60732 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:56.437567   60732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-646344" ...
	I0725 18:49:56.412575   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933396.389223979
	
	I0725 18:49:56.412602   60176 fix.go:216] guest clock: 1721933396.389223979
	I0725 18:49:56.412612   60176 fix.go:229] Guest: 2024-07-25 18:49:56.389223979 +0000 UTC Remote: 2024-07-25 18:49:56.302501019 +0000 UTC m=+249.953644815 (delta=86.72296ms)
	I0725 18:49:56.412634   60176 fix.go:200] guest clock delta is within tolerance: 86.72296ms
	I0725 18:49:56.412639   60176 start.go:83] releasing machines lock for "old-k8s-version-108542", held for 20.375658703s
	I0725 18:49:56.412668   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.412935   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:56.415814   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416191   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.416219   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416398   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.416862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417065   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417160   60176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:56.417201   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.417309   60176 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:56.417329   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.420122   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420371   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420526   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420550   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420682   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.420816   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420846   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.420850   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420984   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.421058   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421126   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.421198   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.421272   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421418   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.529391   60176 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:56.535114   60176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:56.674979   60176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:56.681160   60176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:56.681260   60176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:56.696192   60176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:56.696215   60176 start.go:495] detecting cgroup driver to use...
	I0725 18:49:56.696309   60176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:56.713088   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:56.727033   60176 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:56.727095   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:56.742008   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:56.756146   60176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:56.884075   60176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:57.051613   60176 docker.go:233] disabling docker service ...
	I0725 18:49:57.051742   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:57.068011   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:57.082300   60176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:57.208673   60176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:57.372393   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:57.397281   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:57.418913   60176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0725 18:49:57.418978   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.429833   60176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:57.429909   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.440717   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.451076   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.465052   60176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:57.476592   60176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:57.487164   60176 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:57.487225   60176 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:57.501748   60176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:57.514743   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:57.658648   60176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:49:57.811455   60176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:57.811534   60176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:57.816193   60176 start.go:563] Will wait 60s for crictl version
	I0725 18:49:57.816267   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:49:57.819557   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:57.854511   60176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:57.854594   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.881542   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.910664   60176 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0725 18:49:55.733934   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:58.232025   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:56.438776   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Start
	I0725 18:49:56.438950   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring networks are active...
	I0725 18:49:56.439813   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network default is active
	I0725 18:49:56.440144   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network mk-embed-certs-646344 is active
	I0725 18:49:56.440644   60732 main.go:141] libmachine: (embed-certs-646344) Getting domain xml...
	I0725 18:49:56.441344   60732 main.go:141] libmachine: (embed-certs-646344) Creating domain...
	I0725 18:49:57.747307   60732 main.go:141] libmachine: (embed-certs-646344) Waiting to get IP...
	I0725 18:49:57.748364   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.748801   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.748852   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.748752   61389 retry.go:31] will retry after 207.883752ms: waiting for machine to come up
	I0725 18:49:57.958328   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.958813   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.958837   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.958773   61389 retry.go:31] will retry after 256.983672ms: waiting for machine to come up
	I0725 18:49:58.217316   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.217798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.217858   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.217760   61389 retry.go:31] will retry after 427.650618ms: waiting for machine to come up
	I0725 18:49:58.647668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.648053   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.648088   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.648021   61389 retry.go:31] will retry after 585.454725ms: waiting for machine to come up
	I0725 18:49:59.235003   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.235582   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.235612   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.235535   61389 retry.go:31] will retry after 477.660763ms: waiting for machine to come up
	I0725 18:49:59.715182   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.715675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.715706   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.715628   61389 retry.go:31] will retry after 775.403931ms: waiting for machine to come up
	I0725 18:50:00.492798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:00.493211   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:00.493239   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:00.493160   61389 retry.go:31] will retry after 1.086502086s: waiting for machine to come up
	I0725 18:49:57.912004   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:57.914958   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915429   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:57.915462   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915628   60176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:57.919685   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:57.932248   60176 kubeadm.go:883] updating cluster {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:57.932392   60176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:49:57.932440   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:57.982230   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:49:57.982305   60176 ssh_runner.go:195] Run: which lz4
	I0725 18:49:57.986657   60176 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:49:57.990932   60176 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:57.990956   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0725 18:49:59.415735   60176 crio.go:462] duration metric: took 1.429111358s to copy over tarball
	I0725 18:49:59.415800   60176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:49:59.234882   59645 node_ready.go:49] node "default-k8s-diff-port-600433" has status "Ready":"True"
	I0725 18:49:59.234909   59645 node_ready.go:38] duration metric: took 7.506682834s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:59.234921   59645 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:59.243034   59645 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.249940   59645 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace has status "Ready":"True"
	I0725 18:49:59.250024   59645 pod_ready.go:81] duration metric: took 6.957177ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.250051   59645 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.258057   59645 pod_ready.go:102] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:01.757802   59645 pod_ready.go:92] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.757828   59645 pod_ready.go:81] duration metric: took 2.50775832s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.757840   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762837   59645 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.762862   59645 pod_ready.go:81] duration metric: took 5.014715ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762874   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768001   59645 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.768027   59645 pod_ready.go:81] duration metric: took 5.144999ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768039   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772551   59645 pod_ready.go:92] pod "kube-proxy-smhmv" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.772574   59645 pod_ready.go:81] duration metric: took 4.526528ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772585   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.580990   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:01.581438   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:01.581464   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:01.581397   61389 retry.go:31] will retry after 1.452798696s: waiting for machine to come up
	I0725 18:50:03.036272   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:03.036730   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:03.036766   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:03.036682   61389 retry.go:31] will retry after 1.667137658s: waiting for machine to come up
	I0725 18:50:04.705567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:04.705992   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:04.706019   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:04.705958   61389 retry.go:31] will retry after 2.010863389s: waiting for machine to come up
	I0725 18:50:02.370917   60176 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.955090558s)
	I0725 18:50:02.370951   60176 crio.go:469] duration metric: took 2.955186203s to extract the tarball
	I0725 18:50:02.370960   60176 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:50:02.411686   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:02.448550   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:50:02.448575   60176 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:02.448653   60176 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0725 18:50:02.448657   60176 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.448722   60176 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.448739   60176 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.448661   60176 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450195   60176 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.450213   60176 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0725 18:50:02.450237   60176 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.450335   60176 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.450375   60176 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.450489   60176 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.711747   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.718711   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0725 18:50:02.721465   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.721473   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.728447   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.745432   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.745791   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.776147   60176 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0725 18:50:02.776200   60176 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.776245   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.857374   60176 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0725 18:50:02.857423   60176 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0725 18:50:02.857486   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.876850   60176 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0725 18:50:02.876897   60176 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.876922   60176 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0725 18:50:02.876963   60176 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.876974   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877024   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877044   60176 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0725 18:50:02.877071   60176 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.877107   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.896960   60176 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0725 18:50:02.897008   60176 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.897011   60176 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0725 18:50:02.897042   60176 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.897053   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897061   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.897083   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897120   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0725 18:50:02.897148   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.897196   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.897248   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.992459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.992499   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0725 18:50:03.005360   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0725 18:50:03.005381   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0725 18:50:03.005435   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0725 18:50:03.005459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:03.005503   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0725 18:50:03.042218   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0725 18:50:03.054960   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0725 18:50:03.279419   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:03.416646   60176 cache_images.go:92] duration metric: took 968.05409ms to LoadCachedImages
	W0725 18:50:03.416750   60176 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
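The "needs transfer" lines above come from minikube comparing each cached image's expected ID against what the container runtime reports; when they differ or the image is absent, the image is removed via crictl and reloaded from the local cache, and the final warning shows the fallback taken when the cache file itself is missing on the host. A minimal sketch of that decision, not the minikube implementation; the expected IDs are the ones printed in the log, and the exec wiring is a simplified assumption:

```go
// imagecheck.go - illustrative sketch of the "needs transfer" decision above, not the
// minikube implementation. Expected IDs are the ones printed in the log; the exec
// wiring and error handling are simplified assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeImageID asks the container runtime for the stored ID of an image;
// it returns "" when the image is not present (inspect exits non-zero).
func runtimeImageID(image string) string {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func main() {
	expected := map[string]string{
		"registry.k8s.io/pause:3.2":     "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
		"registry.k8s.io/coredns:1.7.0": "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16",
	}
	for img, want := range expected {
		if got := runtimeImageID(img); got != want {
			// Mirrors the cache_images.go lines above: remove the stale image and reload
			// it from the local cache (or warn, as logged, if the cache file is missing).
			fmt.Printf("%q needs transfer: runtime has %q, want %s\n", img, got, want)
		}
	}
}
```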
	I0725 18:50:03.416767   60176 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.20.0 crio true true} ...
	I0725 18:50:03.416896   60176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-108542 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
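The drop-in rendered above pins the kubelet binary path to the requested Kubernetes version and injects node-specific flags (hostname override, node IP, CRI socket). A hedged sketch of how such a unit could be rendered from a config struct with text/template; the struct fields and template text here are illustrative, not minikube's actual generator:

```go
// kubeletunit.go - hedged sketch of rendering a kubelet systemd drop-in like the one
// logged above. The template and struct are illustrative, not minikube's own code.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
	CRISocket         string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --config=/var/lib/kubelet/config.yaml \
  --container-runtime=remote \
  --container-runtime-endpoint=unix://{{.CRISocket}} \
  --hostname-override={{.NodeName}} \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --node-ip={{.NodeIP}}

[Install]
`

func main() {
	opts := kubeletOpts{
		KubernetesVersion: "v1.20.0",
		NodeName:          "old-k8s-version-108542",
		NodeIP:            "192.168.39.29",
		CRISocket:         "/var/run/crio/crio.sock",
	}
	// Render to stdout; in the log the rendered unit is scp'd to the guest instead.
	if err := template.Must(template.New("unit").Parse(unitTmpl)).Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```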
	I0725 18:50:03.416979   60176 ssh_runner.go:195] Run: crio config
	I0725 18:50:03.470581   60176 cni.go:84] Creating CNI manager for ""
	I0725 18:50:03.470611   60176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:03.470627   60176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:03.470647   60176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-108542 NodeName:old-k8s-version-108542 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0725 18:50:03.470772   60176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-108542"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:03.470828   60176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0725 18:50:03.481757   60176 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:03.481839   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:03.494342   60176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0725 18:50:03.511779   60176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:03.532137   60176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
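The kubeadm configuration rendered above is written to /var/tmp/minikube/kubeadm.yaml.new on the guest (2120 bytes here) and later diffed against the existing kubeadm.yaml to decide whether the control plane needs reconfiguration. A small sketch of a pre-copy sanity check, assuming the file path from the log; the string checks are purely illustrative, not minikube's validation:

```go
// kubeadmcheck.go - sketch: sanity-check the rendered kubeadm.yaml.new before it is
// copied over kubeadm.yaml (mirrors the diff/cp step later in the log). Illustrative
// only; minikube's own validation differs.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	text := string(data)
	// The rendered file should pin the expected version, endpoint and socket.
	for _, want := range []string{
		"kubernetesVersion: v1.20.0",
		"advertiseAddress: 192.168.39.29",
		"criSocket: /var/run/crio/crio.sock",
	} {
		if !strings.Contains(text, want) {
			fmt.Printf("missing %q in kubeadm.yaml.new\n", want)
		}
	}
}
```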
	I0725 18:50:03.551049   60176 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:03.554903   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:03.566677   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:03.687540   60176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:03.710900   60176 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542 for IP: 192.168.39.29
	I0725 18:50:03.710922   60176 certs.go:194] generating shared ca certs ...
	I0725 18:50:03.710937   60176 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:03.711088   60176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:03.711126   60176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:03.711132   60176 certs.go:256] generating profile certs ...
	I0725 18:50:03.711231   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key
	I0725 18:50:03.711282   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0
	I0725 18:50:03.711315   60176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key
	I0725 18:50:03.711420   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:03.711449   60176 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:03.711458   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:03.711479   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:03.711499   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:03.711520   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:03.711562   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:03.712203   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:03.762265   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:03.804226   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:03.840167   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:03.868353   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 18:50:03.893425   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:03.917266   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:03.946205   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:03.974128   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:04.001887   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:04.026495   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:04.049083   60176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:04.065407   60176 ssh_runner.go:195] Run: openssl version
	I0725 18:50:04.071064   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:04.082038   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086705   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086760   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.092445   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:04.103129   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:04.113789   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118390   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118467   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.123884   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:04.134230   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:04.144372   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148559   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148620   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.153744   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:04.163757   60176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:04.167873   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:04.173706   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:04.179385   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:04.185222   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:04.190716   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:04.196938   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
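Each `-checkend 86400` call above asks openssl whether the certificate will still be valid in 24 hours; a failing check would trigger regeneration. The same test can be expressed in Go by comparing the certificate's NotAfter against now plus 24h; this is a sketch under that assumption, not minikube's implementation:

```go
// certcheck.go - sketch of the "-checkend 86400" validity check performed above, done
// in Go instead of shelling out to openssl. Paths mirror the log; illustrative only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		if soon {
			fmt.Println(p, "expires within 24h - would trigger regeneration")
		}
	}
}
```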
	I0725 18:50:04.202361   60176 kubeadm.go:392] StartCluster: {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:04.202447   60176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:04.202505   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.243628   60176 cri.go:89] found id: ""
	I0725 18:50:04.243703   60176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:04.253768   60176 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:04.253788   60176 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:04.253841   60176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:04.264596   60176 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:04.265990   60176 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-108542" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:04.266997   60176 kubeconfig.go:62] /home/jenkins/minikube-integration/19326-5877/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-108542" cluster setting kubeconfig missing "old-k8s-version-108542" context setting]
	I0725 18:50:04.268480   60176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:04.388386   60176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:04.398469   60176 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.29
	I0725 18:50:04.398517   60176 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:04.398530   60176 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:04.398590   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.434823   60176 cri.go:89] found id: ""
	I0725 18:50:04.434906   60176 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:04.453378   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:04.463520   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:04.463559   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:04.463611   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:04.473075   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:04.473138   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:04.482881   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:04.494801   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:04.494875   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:04.507011   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.516433   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:04.516505   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.528076   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:04.537505   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:04.537572   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
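The grep/rm sequence above keeps an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443 and removes it otherwise, so the following `kubeadm init phase kubeconfig all` can regenerate it. A local sketch of that policy under the same file list as the log; minikube actually runs these checks as shell commands over SSH:

```go
// staleconfig.go - sketch of the grep/rm cleanup above: keep a kubeconfig only if it
// already targets the expected control-plane endpoint, otherwise remove it so that
// "kubeadm init phase kubeconfig" recreates it. Illustrative; not minikube code.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, path := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or pointing elsewhere: drop it and let kubeadm regenerate it.
			_ = os.Remove(path)
			fmt.Println("removed (or absent):", path)
			continue
		}
		fmt.Println("kept:", path)
	}
}
```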
	I0725 18:50:04.547429   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:04.556717   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:04.754947   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.606839   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.850150   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.957944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:06.039317   60176 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:06.039436   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:04.245768   59645 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:05.780345   59645 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:05.780380   59645 pod_ready.go:81] duration metric: took 4.007784646s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:05.780395   59645 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:07.787259   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:06.718406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:06.718961   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:06.718995   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:06.718902   61389 retry.go:31] will retry after 2.686345537s: waiting for machine to come up
	I0725 18:50:09.406854   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:09.407346   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:09.407388   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:09.407313   61389 retry.go:31] will retry after 3.432781605s: waiting for machine to come up
	I0725 18:50:06.539802   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.539809   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.539594   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.040315   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.539830   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.039578   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.539828   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:11.039598   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
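The repeated pgrep lines are a poll loop: the same exact-match pgrep runs roughly every 500ms until a kube-apiserver process appears. A sketch of that loop with an assumed one-minute deadline; the timeout value is an illustration, not taken from the log:

```go
// waitapiserver.go - sketch of the polling seen above: run pgrep every 500ms until a
// kube-apiserver process appears or the deadline passes. Interval and timeout values
// are assumptions for illustration.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same check the log repeats: an exact-match pgrep on the full command line.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(1 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```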
	I0725 18:50:10.285959   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:12.287101   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
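The pod_ready lines poll a pod's Ready condition until it flips to True or the stated 6m0s budget runs out. A hedged client-go sketch of the same wait; the kubeconfig path and pod name are copied from the log for illustration, and the 2-second poll interval is an assumption:

```go
// podready.go - hedged sketch of the pod_ready wait above: poll a pod until its Ready
// condition is True or a timeout expires. Uses client-go; names/paths are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19326-5877/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // matches the "waiting up to 6m0s" in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-5js8s", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```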
	I0725 18:50:14.181127   59378 start.go:364] duration metric: took 53.405056746s to acquireMachinesLock for "no-preload-371663"
	I0725 18:50:14.181178   59378 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:50:14.181187   59378 fix.go:54] fixHost starting: 
	I0725 18:50:14.181648   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:14.181689   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:14.198182   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I0725 18:50:14.198640   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:14.199151   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:14.199176   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:14.199619   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:14.199815   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:14.199945   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:14.201475   59378 fix.go:112] recreateIfNeeded on no-preload-371663: state=Stopped err=<nil>
	I0725 18:50:14.201496   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	W0725 18:50:14.201653   59378 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:50:14.203496   59378 out.go:177] * Restarting existing kvm2 VM for "no-preload-371663" ...
	I0725 18:50:12.841703   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842187   60732 main.go:141] libmachine: (embed-certs-646344) Found IP for machine: 192.168.61.133
	I0725 18:50:12.842222   60732 main.go:141] libmachine: (embed-certs-646344) Reserving static IP address...
	I0725 18:50:12.842234   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has current primary IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842625   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.842650   60732 main.go:141] libmachine: (embed-certs-646344) DBG | skip adding static IP to network mk-embed-certs-646344 - found existing host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"}
	I0725 18:50:12.842660   60732 main.go:141] libmachine: (embed-certs-646344) Reserved static IP address: 192.168.61.133
	I0725 18:50:12.842671   60732 main.go:141] libmachine: (embed-certs-646344) Waiting for SSH to be available...
	I0725 18:50:12.842684   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Getting to WaitForSSH function...
	I0725 18:50:12.844916   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845214   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.845237   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845372   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH client type: external
	I0725 18:50:12.845406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa (-rw-------)
	I0725 18:50:12.845474   60732 main.go:141] libmachine: (embed-certs-646344) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:12.845498   60732 main.go:141] libmachine: (embed-certs-646344) DBG | About to run SSH command:
	I0725 18:50:12.845528   60732 main.go:141] libmachine: (embed-certs-646344) DBG | exit 0
	I0725 18:50:12.968383   60732 main.go:141] libmachine: (embed-certs-646344) DBG | SSH cmd err, output: <nil>: 
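The WaitForSSH step above simply runs an external `ssh ... exit 0` against the machine until it succeeds. A sketch under that assumption; the retry count and interval are illustrative, while the flag set, key path, and IP are taken from the DBG line:

```go
// waitssh.go - sketch of the WaitForSSH step above: run "ssh ... exit 0" against the
// machine until it succeeds. The retry policy is an assumption for illustration.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshExitZero(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes", "-i", keyPath,
		"docker@" + ip, "exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa"
	for i := 0; i < 30; i++ {
		if err := sshExitZero("192.168.61.133", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```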
	I0725 18:50:12.968690   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetConfigRaw
	I0725 18:50:12.969249   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:12.971567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972072   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.972102   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972338   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:50:12.972526   60732 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:12.972544   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:12.972739   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:12.974938   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975308   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.975336   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975462   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:12.975671   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.975831   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.976010   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:12.976184   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:12.976414   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:12.976428   60732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:13.076310   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:13.076369   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076609   60732 buildroot.go:166] provisioning hostname "embed-certs-646344"
	I0725 18:50:13.076637   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076830   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.079542   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.079895   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.079923   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.080050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.080232   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080385   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080530   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.080722   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.080917   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.080935   60732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-646344 && echo "embed-certs-646344" | sudo tee /etc/hostname
	I0725 18:50:13.193782   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-646344
	
	I0725 18:50:13.193814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.196822   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197149   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.197192   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197367   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.197581   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197772   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197906   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.198079   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.198292   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.198315   60732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-646344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-646344/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-646344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:13.313070   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:50:13.313098   60732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:13.313146   60732 buildroot.go:174] setting up certificates
	I0725 18:50:13.313161   60732 provision.go:84] configureAuth start
	I0725 18:50:13.313176   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.313457   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:13.316245   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316666   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.316695   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.319178   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319516   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.319540   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319697   60732 provision.go:143] copyHostCerts
	I0725 18:50:13.319751   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:13.319763   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:13.319816   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:13.319900   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:13.319908   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:13.319929   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:13.319981   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:13.319988   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:13.320004   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:13.320051   60732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.embed-certs-646344 san=[127.0.0.1 192.168.61.133 embed-certs-646344 localhost minikube]
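The provisioning step above issues a server certificate signed by the minikube CA with the listed SANs (127.0.0.1, 192.168.61.133, embed-certs-646344, localhost, minikube). A self-contained sketch of that issuance with crypto/x509; the key sizes, serial numbers, and freshly generated CA are simplifications, not minikube's certificate code:

```go
// servercert.go - illustrative sketch of the "generating server cert ... san=[...]"
// step: sign a server certificate for the listed IP/DNS SANs. In minikube the CA
// key/cert come from ~/.minikube/certs; here one is generated just for the sketch.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-646344"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // roughly the 26280h0m0s CertExpiration above
		DNSNames:     []string{"embed-certs-646344", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.133")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```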
	I0725 18:50:13.540822   60732 provision.go:177] copyRemoteCerts
	I0725 18:50:13.540881   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:13.540903   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.543520   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.543805   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.543855   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.544013   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.544227   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.544450   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.544649   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:13.629982   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:13.652453   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:13.674398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:50:13.698302   60732 provision.go:87] duration metric: took 385.127611ms to configureAuth
	I0725 18:50:13.698329   60732 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:13.698501   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:13.698574   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.701274   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.701702   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701850   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.702049   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702345   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.702510   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.702699   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.702720   60732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:13.954912   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:13.954942   60732 machine.go:97] duration metric: took 982.402505ms to provisionDockerMachine
	I0725 18:50:13.954953   60732 start.go:293] postStartSetup for "embed-certs-646344" (driver="kvm2")
	I0725 18:50:13.954963   60732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:13.954978   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:13.955269   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:13.955301   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.957946   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958309   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.958332   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958459   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.958663   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.958805   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.959017   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.039361   60732 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:14.043389   60732 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:14.043416   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:14.043488   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:14.043588   60732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:14.043686   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:14.053277   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:14.075725   60732 start.go:296] duration metric: took 120.758673ms for postStartSetup
	I0725 18:50:14.075772   60732 fix.go:56] duration metric: took 17.662990552s for fixHost
	I0725 18:50:14.075795   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.078338   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078728   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.078782   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078932   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.079187   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079393   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.079763   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:14.080049   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:14.080068   60732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 18:50:14.180948   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933414.131955665
	
	I0725 18:50:14.180974   60732 fix.go:216] guest clock: 1721933414.131955665
	I0725 18:50:14.180983   60732 fix.go:229] Guest: 2024-07-25 18:50:14.131955665 +0000 UTC Remote: 2024-07-25 18:50:14.075776451 +0000 UTC m=+142.772748611 (delta=56.179214ms)
	I0725 18:50:14.181032   60732 fix.go:200] guest clock delta is within tolerance: 56.179214ms
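The fix step reads the guest clock with `date +%s.%N` over SSH, computes the delta to the host clock, and accepts the machine when the delta is within tolerance (56ms here). A sketch of that computation; the 2-second tolerance constant is an assumption for illustration:

```go
// clockdelta.go - sketch of the guest-clock tolerance check logged above: parse the
// guest's "date +%s.%N" output, compute the delta to host time, compare to a tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1721933414.131955665" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721933414.131955665") // value taken from the log
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, not from the log
	fmt.Printf("guest clock delta %v (tolerance %v, within=%v)\n", delta, tolerance, delta <= tolerance)
}
```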
	I0725 18:50:14.181038   60732 start.go:83] releasing machines lock for "embed-certs-646344", held for 17.768291807s
	I0725 18:50:14.181069   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.181338   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:14.183693   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184035   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.184065   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184195   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184748   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184936   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.185004   60732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:14.185050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.185172   60732 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:14.185203   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.187720   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188004   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188071   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188095   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188367   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188393   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188397   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188555   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.188567   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188738   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188757   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.188868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.189001   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.270424   60732 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:14.322480   60732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:14.468034   60732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:14.474022   60732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:14.474090   60732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:14.494765   60732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:50:14.494793   60732 start.go:495] detecting cgroup driver to use...
	I0725 18:50:14.494862   60732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:14.515047   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:14.531708   60732 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:14.531773   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:14.546508   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:14.560878   60732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:14.681034   60732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:14.830960   60732 docker.go:233] disabling docker service ...
	I0725 18:50:14.831032   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:14.853115   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:14.869852   60732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:14.995284   60732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:15.109759   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:15.123118   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:15.140723   60732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:50:15.140792   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.150912   60732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:15.150968   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.161603   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.173509   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.183857   60732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:15.195023   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.207216   60732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.223821   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
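The sed runs above rewrite cri-o's drop-in config to set the pause image, switch the cgroup manager to cgroupfs, pin conmon to the "pod" cgroup, and open unprivileged low ports. A small Go sketch that collects those edits as shell commands follows; the runner is hypothetical (the real code issues them through minikube's SSH runner), but the values mirror the log.

// Illustrative sketch: the shell edits applied to cri-o's drop-in config,
// collected as a slice of commands. Printing stands in for executing them
// over SSH.
package main

import "fmt"

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := []string{
		// point cri-o at the desired pause image
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf),
		// use cgroupfs as the cgroup manager and put conmon in the "pod" cgroup
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		// allow binding low ports without privileges inside pods
		fmt.Sprintf(`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' %s`, conf),
	}
	for _, c := range cmds {
		fmt.Println(c) // a real runner would execute these over SSH
	}
}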
	I0725 18:50:15.234472   60732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:15.243979   60732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:15.244032   60732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:15.256791   60732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:50:15.268608   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:15.396398   60732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:50:15.528593   60732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:15.528659   60732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:15.534218   60732 start.go:563] Will wait 60s for crictl version
	I0725 18:50:15.534288   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:50:15.537933   60732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:15.583719   60732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:15.583824   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.613123   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.643327   60732 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:50:14.204765   59378 main.go:141] libmachine: (no-preload-371663) Calling .Start
	I0725 18:50:14.204935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring networks are active...
	I0725 18:50:14.205596   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network default is active
	I0725 18:50:14.205935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network mk-no-preload-371663 is active
	I0725 18:50:14.206473   59378 main.go:141] libmachine: (no-preload-371663) Getting domain xml...
	I0725 18:50:14.207048   59378 main.go:141] libmachine: (no-preload-371663) Creating domain...
	I0725 18:50:15.487909   59378 main.go:141] libmachine: (no-preload-371663) Waiting to get IP...
	I0725 18:50:15.488775   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.489188   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.489244   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.489164   61562 retry.go:31] will retry after 288.758246ms: waiting for machine to come up
	I0725 18:50:15.779810   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.780284   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.780346   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.780234   61562 retry.go:31] will retry after 255.724346ms: waiting for machine to come up
	I0725 18:50:15.644608   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:15.647958   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648356   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:15.648386   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648602   60732 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:15.652342   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:15.664409   60732 kubeadm.go:883] updating cluster {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:15.664587   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:50:15.664658   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:15.701646   60732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:50:15.701703   60732 ssh_runner.go:195] Run: which lz4
	I0725 18:50:15.705629   60732 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 18:50:15.709366   60732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:50:15.709398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:50:11.540367   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.040178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.039929   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.540517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.040281   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.540287   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.039549   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.540265   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:16.039520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.828431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:17.287944   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:16.037762   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.038357   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.038391   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.038313   61562 retry.go:31] will retry after 486.960289ms: waiting for machine to come up
	I0725 18:50:16.527269   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.527868   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.527899   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.527826   61562 retry.go:31] will retry after 389.104399ms: waiting for machine to come up
	I0725 18:50:16.918319   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.918911   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.918945   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.918854   61562 retry.go:31] will retry after 690.549271ms: waiting for machine to come up
	I0725 18:50:17.610632   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:17.611242   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:17.611269   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:17.611199   61562 retry.go:31] will retry after 753.624655ms: waiting for machine to come up
	I0725 18:50:18.366551   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:18.367078   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:18.367119   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:18.367022   61562 retry.go:31] will retry after 1.115992813s: waiting for machine to come up
	I0725 18:50:19.484121   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:19.484611   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:19.484641   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:19.484556   61562 retry.go:31] will retry after 1.306583093s: waiting for machine to come up
	I0725 18:50:20.793118   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:20.793603   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:20.793630   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:20.793548   61562 retry.go:31] will retry after 1.175948199s: waiting for machine to come up
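The repeated "will retry after ...: waiting for machine to come up" lines are produced by a retry helper that re-checks the VM's DHCP lease with a growing, jittered delay. The Go sketch below shows such a loop under assumed backoff parameters; it is not minikube's retry package, just an illustration of the pattern.

// Small sketch of a retry-with-growing-jitter loop like the "will retry after"
// messages above. Backoff values are assumptions for illustration.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(op func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	wait := 250 * time.Millisecond
	for {
		if err := op(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		d := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
		wait = wait * 3 / 2 // grow the base interval
	}
}

func main() {
	attempts := 0
	_ = retryUntil(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet") // simulated "unable to find current IP address"
		}
		return nil
	}, 30*time.Second)
}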
	I0725 18:50:17.015043   60732 crio.go:462] duration metric: took 1.309449954s to copy over tarball
	I0725 18:50:17.015143   60732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:50:19.256777   60732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241585619s)
	I0725 18:50:19.256816   60732 crio.go:469] duration metric: took 2.241743782s to extract the tarball
	I0725 18:50:19.256825   60732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:50:19.293259   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:19.346692   60732 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:50:19.346714   60732 cache_images.go:84] Images are preloaded, skipping loading
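The preload flow above first asks crictl whether the expected images already exist, and when they do not, copies the cached preload tarball into the VM and unpacks it into /var with lz4 before deleting it. The Go sketch below lists that command sequence; the scp step is simplified and the helper is hypothetical, but the commands mirror the log.

// Illustrative sketch of the preload flow: detect missing images, ship the
// cached tarball, extract it, then remove it to free space.
package main

import "fmt"

func preloadCommands(k8sVersion string) []string {
	tarball := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return []string{
		"sudo crictl images --output json",      // detect whether images are already present
		`stat -c "%s %y" /preloaded.tar.lz4`,    // existence check on the guest
		"scp " + tarball + " /preloaded.tar.lz4", // copy the cached tarball (simplified)
		"sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4",
		"rm /preloaded.tar.lz4",                 // free space once extracted
	}
}

func main() {
	for _, c := range preloadCommands("v1.30.3") {
		fmt.Println(c)
	}
}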
	I0725 18:50:19.346722   60732 kubeadm.go:934] updating node { 192.168.61.133 8443 v1.30.3 crio true true} ...
	I0725 18:50:19.346822   60732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-646344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:19.346884   60732 ssh_runner.go:195] Run: crio config
	I0725 18:50:19.391246   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:19.391272   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:19.391287   60732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:19.391320   60732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.133 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-646344 NodeName:embed-certs-646344 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.133"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:19.391518   60732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-646344"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.133
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.133"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
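The generated kubeadm.yaml above stitches together InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents from per-cluster parameters. A reduced Go sketch of rendering such a document with text/template follows; the template is a stand-in for illustration, not minikube's actual template.

// Minimal sketch, assuming a text/template approach in the spirit of the
// kubeadm config generation above. Only a subset of fields is shown.
package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.61.133",
		BindPort:          8443,
		NodeName:          "embed-certs-646344",
		KubernetesVersion: "v1.30.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
}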
	
	I0725 18:50:19.391597   60732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:50:19.401672   60732 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:19.401743   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:19.410693   60732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0725 18:50:19.428155   60732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:19.443819   60732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0725 18:50:19.461139   60732 ssh_runner.go:195] Run: grep 192.168.61.133	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:19.465121   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.133	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:19.478939   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:19.593175   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:19.609679   60732 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344 for IP: 192.168.61.133
	I0725 18:50:19.609705   60732 certs.go:194] generating shared ca certs ...
	I0725 18:50:19.609726   60732 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:19.609918   60732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:19.609976   60732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:19.609989   60732 certs.go:256] generating profile certs ...
	I0725 18:50:19.610096   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/client.key
	I0725 18:50:19.610176   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key.b1982a11
	I0725 18:50:19.610227   60732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key
	I0725 18:50:19.610380   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:19.610424   60732 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:19.610436   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:19.610467   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:19.610490   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:19.610518   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:19.610575   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:19.611227   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:19.647448   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:19.679186   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:19.703996   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:19.731396   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0725 18:50:19.759550   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:50:19.795812   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:19.818419   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:19.840831   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:19.862271   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:19.886159   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:19.910827   60732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:19.926056   60732 ssh_runner.go:195] Run: openssl version
	I0725 18:50:19.931721   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:19.942217   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946261   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946324   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.951695   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:19.961642   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:19.971592   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975615   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975671   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.980904   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:19.991023   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:20.001258   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005322   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005398   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.010666   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
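The certificate installs above copy each PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients pick it up. A small Go sketch of that pattern follows; it shells out to the openssl binary for the hash and only prints the resulting ln command rather than running it.

// Sketch of the CA-certificate install pattern: hash the PEM with openssl and
// symlink it under /etc/ssl/certs as <hash>.0.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("hashing failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	fmt.Printf("sudo ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
}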
	I0725 18:50:20.021300   60732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:20.025462   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:20.031181   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:20.037216   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:20.043670   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:20.051210   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:20.057316   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
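The openssl x509 -checkend 86400 runs above verify that each control-plane certificate will still be valid 24 hours from now. The Go sketch below does the equivalent check with crypto/x509; the file path is an example, and whether it exists on a given node is an assumption.

// Illustrative alternative to `openssl x509 -checkend 86400`: report whether a
// PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}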
	I0725 18:50:20.062598   60732 kubeadm.go:392] StartCluster: {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:20.062719   60732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:20.062793   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.098154   60732 cri.go:89] found id: ""
	I0725 18:50:20.098229   60732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:20.107991   60732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:20.108017   60732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:20.108066   60732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:20.117394   60732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:20.118456   60732 kubeconfig.go:125] found "embed-certs-646344" server: "https://192.168.61.133:8443"
	I0725 18:50:20.120660   60732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:20.129546   60732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.133
	I0725 18:50:20.129576   60732 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:20.129589   60732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:20.129645   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.162792   60732 cri.go:89] found id: ""
	I0725 18:50:20.162883   60732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:20.178972   60732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:20.187981   60732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:20.188005   60732 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:20.188060   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:20.197371   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:20.197429   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:20.206704   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:20.215394   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:20.215459   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:20.224116   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.232437   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:20.232495   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.241577   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:20.249916   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:20.249976   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:20.258838   60732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:20.267902   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:20.380000   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:16.539725   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.539756   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.040221   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.539666   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.040416   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.540379   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.040257   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.540153   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:21.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.787705   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:22.230346   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:21.971072   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:21.971517   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:21.971544   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:21.971471   61562 retry.go:31] will retry after 1.926890777s: waiting for machine to come up
	I0725 18:50:23.900824   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:23.901448   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:23.901479   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:23.901397   61562 retry.go:31] will retry after 1.777870483s: waiting for machine to come up
	I0725 18:50:25.681617   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:25.682161   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:25.682190   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:25.682122   61562 retry.go:31] will retry after 2.846649743s: waiting for machine to come up
	I0725 18:50:21.816404   60732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.436368273s)
	I0725 18:50:21.816441   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.014796   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.093533   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
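Because existing configuration was found, the restart path re-runs individual kubeadm init phases against the generated kubeadm.yaml instead of a full kubeadm init. The sketch below simply enumerates those phase invocations as they appear in the log; the loop that prints them is illustrative, not minikube's code.

// Sketch of the cluster-restart path: re-run kubeadm init phases one by one.
package main

import "fmt"

func main() {
	base := `sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`
	phases := []string{
		"certs all",         // regenerate any missing certificates
		"kubeconfig all",    // write admin/kubelet/controller-manager/scheduler kubeconfigs
		"kubelet-start",     // write kubelet config and (re)start the kubelet
		"control-plane all", // static pod manifests for apiserver, controller-manager, scheduler
		"etcd local",        // static pod manifest for a local etcd
	}
	for _, p := range phases {
		fmt.Printf(base+"\n", p)
	}
}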
	I0725 18:50:22.201595   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:22.201692   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.702680   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.202769   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.701909   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.720378   60732 api_server.go:72] duration metric: took 1.518780528s to wait for apiserver process to appear ...
	I0725 18:50:23.720468   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:23.720503   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
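The healthz wait that starts here polls https://<node>:8443/healthz roughly every half second, treating 403 and 500 responses (seen further below) as "not ready yet" and stopping once the endpoint returns ok. A minimal Go sketch of that loop follows; it skips TLS verification for brevity, whereas the real check authenticates with the cluster's credentials.

// Minimal sketch of polling the apiserver /healthz endpoint until it is healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.133:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}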
	I0725 18:50:21.540165   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.539544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.040164   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.539691   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.040229   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.540225   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.039517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.540158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.542598   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:26.542661   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:26.542677   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.653001   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.653044   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:26.721231   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.725819   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.725851   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.221435   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.226412   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.226452   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.720962   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.726521   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.726550   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:28.221186   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:28.225358   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:50:28.232310   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:50:28.232348   60732 api_server.go:131] duration metric: took 4.511861085s to wait for apiserver health ...
	I0725 18:50:28.232359   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:28.232368   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:28.234169   60732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:50:24.287433   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:26.287625   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.287755   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.235545   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:28.246029   60732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:50:28.265973   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:28.277752   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:28.277791   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:28.277801   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:28.277818   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:28.277830   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:28.277839   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:28.277851   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:28.277861   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:28.277868   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:28.277878   60732 system_pods.go:74] duration metric: took 11.88598ms to wait for pod list to return data ...
	I0725 18:50:28.277895   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:28.282289   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:28.282320   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:28.282335   60732 node_conditions.go:105] duration metric: took 4.431712ms to run NodePressure ...
	I0725 18:50:28.282354   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:28.551353   60732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557049   60732 kubeadm.go:739] kubelet initialised
	I0725 18:50:28.557074   60732 kubeadm.go:740] duration metric: took 5.692584ms waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557083   60732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:28.564396   60732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.568721   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568745   60732 pod_ready.go:81] duration metric: took 4.325942ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.568755   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568762   60732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.572373   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572397   60732 pod_ready.go:81] duration metric: took 3.627867ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.572404   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572411   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.576876   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576897   60732 pod_ready.go:81] duration metric: took 4.478779ms for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.576903   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576909   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.669762   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669788   60732 pod_ready.go:81] duration metric: took 92.870934ms for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.669797   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669803   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.069536   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069564   60732 pod_ready.go:81] duration metric: took 399.753713ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.069573   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069580   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.471102   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471130   60732 pod_ready.go:81] duration metric: took 401.542911ms for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.471139   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471145   60732 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.869464   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869499   60732 pod_ready.go:81] duration metric: took 398.344638ms for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.869511   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869520   60732 pod_ready.go:38] duration metric: took 1.312426343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:29.869549   60732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:29.881205   60732 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:29.881230   60732 kubeadm.go:597] duration metric: took 9.773206057s to restartPrimaryControlPlane
	I0725 18:50:29.881241   60732 kubeadm.go:394] duration metric: took 9.818649836s to StartCluster
	I0725 18:50:29.881264   60732 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.881348   60732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:29.882924   60732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.883197   60732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:29.883269   60732 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:29.883366   60732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-646344"
	I0725 18:50:29.883380   60732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-646344"
	I0725 18:50:29.883401   60732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-646344"
	W0725 18:50:29.883411   60732 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:29.883425   60732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-646344"
	I0725 18:50:29.883419   60732 addons.go:69] Setting metrics-server=true in profile "embed-certs-646344"
	I0725 18:50:29.883444   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883461   60732 addons.go:234] Setting addon metrics-server=true in "embed-certs-646344"
	W0725 18:50:29.883481   60732 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:29.883443   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:29.883512   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883840   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883870   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883929   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883969   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883935   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.884014   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.885204   60732 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:29.886676   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:29.899359   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0725 18:50:29.899418   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0725 18:50:29.899865   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900280   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900493   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900513   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900744   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900769   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900850   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901092   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901288   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.901473   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.901504   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.903520   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0725 18:50:29.903975   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.904512   60732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-646344"
	W0725 18:50:29.904529   60732 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:29.904542   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.904551   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.904558   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.904830   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.904854   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.904861   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.905388   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.905425   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.917614   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0725 18:50:29.918105   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.918628   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.918660   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.918960   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.919128   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.920885   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.922852   60732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:29.923872   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0725 18:50:29.923895   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37193
	I0725 18:50:29.924134   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:29.924148   60732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:29.924167   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.924376   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924451   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924817   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924837   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.924970   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924985   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.925223   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.925473   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.925493   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.926319   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.926366   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.926970   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.927368   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.927829   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927971   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.928192   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.928355   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.928445   60732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:28.529935   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:28.530428   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:28.530449   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:28.530381   61562 retry.go:31] will retry after 2.913225709s: waiting for machine to come up
	I0725 18:50:29.928527   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.929735   60732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:29.929755   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:29.929770   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.932668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933040   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.933066   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933304   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.933499   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.933674   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.933806   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.947401   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42223
	I0725 18:50:29.947801   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.948222   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.948249   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.948567   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.948819   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.950344   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.950550   60732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:29.950566   60732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:29.950584   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.953193   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953589   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.953618   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953892   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.954062   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.954224   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.954348   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:30.074297   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:30.095138   60732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-646344" to be "Ready" ...
	I0725 18:50:30.149031   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:30.154470   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:30.247852   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:30.247872   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:30.264189   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:30.264220   60732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:30.282583   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:30.282606   60732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:30.298927   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:31.226498   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.071992912s)
	I0725 18:50:31.226572   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226587   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.226730   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077663797s)
	I0725 18:50:31.226771   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226782   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227150   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227166   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227166   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227171   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227175   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227183   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227186   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227192   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227198   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227217   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227468   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227483   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227495   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227502   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227548   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227556   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.234538   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.234562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.234822   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.234839   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237597   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237615   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.237853   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.237871   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237871   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.237879   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237888   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.238123   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.238133   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.238144   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.238155   60732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-646344"
	I0725 18:50:31.239876   60732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:50:31.241165   60732 addons.go:510] duration metric: took 1.357900639s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0725 18:50:26.540560   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.039938   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.539928   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.039509   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.540137   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.040535   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.539745   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.039557   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.540254   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:31.040189   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.787880   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:33.288654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:31.446688   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has current primary IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447343   59378 main.go:141] libmachine: (no-preload-371663) Found IP for machine: 192.168.72.62
	I0725 18:50:31.447351   59378 main.go:141] libmachine: (no-preload-371663) Reserving static IP address...
	I0725 18:50:31.447800   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.447831   59378 main.go:141] libmachine: (no-preload-371663) DBG | skip adding static IP to network mk-no-preload-371663 - found existing host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"}
	I0725 18:50:31.447848   59378 main.go:141] libmachine: (no-preload-371663) Reserved static IP address: 192.168.72.62
	I0725 18:50:31.447862   59378 main.go:141] libmachine: (no-preload-371663) Waiting for SSH to be available...
	I0725 18:50:31.447875   59378 main.go:141] libmachine: (no-preload-371663) DBG | Getting to WaitForSSH function...
	I0725 18:50:31.449978   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450325   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.450344   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450468   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH client type: external
	I0725 18:50:31.450499   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa (-rw-------)
	I0725 18:50:31.450530   59378 main.go:141] libmachine: (no-preload-371663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:31.450547   59378 main.go:141] libmachine: (no-preload-371663) DBG | About to run SSH command:
	I0725 18:50:31.450553   59378 main.go:141] libmachine: (no-preload-371663) DBG | exit 0
	I0725 18:50:31.576105   59378 main.go:141] libmachine: (no-preload-371663) DBG | SSH cmd err, output: <nil>: 
	I0725 18:50:31.576631   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetConfigRaw
	I0725 18:50:31.577245   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.579460   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.579968   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.580003   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.580381   59378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/config.json ...
	I0725 18:50:31.580703   59378 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:31.580728   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:31.580956   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.583261   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583564   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.583592   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583717   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.583910   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584085   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584246   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.584476   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.584689   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.584701   59378 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:31.696230   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:31.696261   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696509   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:50:31.696536   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696714   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.699042   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699322   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.699359   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699484   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.699701   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699968   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.700164   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.700480   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.700503   59378 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-371663 && echo "no-preload-371663" | sudo tee /etc/hostname
	I0725 18:50:31.826044   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-371663
	
	I0725 18:50:31.826069   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.828951   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829261   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.829313   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829483   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.829695   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.829878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.830065   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.830274   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.830449   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.830466   59378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-371663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-371663/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-371663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:31.948518   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:50:31.948561   59378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:31.948739   59378 buildroot.go:174] setting up certificates
	I0725 18:50:31.948753   59378 provision.go:84] configureAuth start
	I0725 18:50:31.948771   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.949045   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.951790   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952169   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.952194   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952363   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.954317   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954610   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.954633   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954770   59378 provision.go:143] copyHostCerts
	I0725 18:50:31.954835   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:31.954848   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:31.954901   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:31.954987   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:31.954997   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:31.955021   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:31.955074   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:31.955081   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:31.955097   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:31.955149   59378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.no-preload-371663 san=[127.0.0.1 192.168.72.62 localhost minikube no-preload-371663]
	I0725 18:50:32.038369   59378 provision.go:177] copyRemoteCerts
	I0725 18:50:32.038427   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:32.038448   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.041392   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041787   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.041823   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041961   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.042148   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.042322   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.042454   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.130425   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:32.153447   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:32.179831   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:50:32.202512   59378 provision.go:87] duration metric: took 253.73326ms to configureAuth
	I0725 18:50:32.202539   59378 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:32.202722   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:32.202787   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.205038   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205415   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.205445   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205666   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.205853   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206022   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206162   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.206347   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.206543   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.206569   59378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:32.483108   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:32.483135   59378 machine.go:97] duration metric: took 902.412636ms to provisionDockerMachine
	I0725 18:50:32.483147   59378 start.go:293] postStartSetup for "no-preload-371663" (driver="kvm2")
	I0725 18:50:32.483162   59378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:32.483182   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.483495   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:32.483525   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.486096   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486457   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.486484   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486662   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.486856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.487002   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.487133   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.575210   59378 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:32.579169   59378 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:32.579196   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:32.579278   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:32.579383   59378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:32.579558   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:32.588619   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:32.611429   59378 start.go:296] duration metric: took 128.267646ms for postStartSetup
	I0725 18:50:32.611471   59378 fix.go:56] duration metric: took 18.430282963s for fixHost
	I0725 18:50:32.611493   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.614328   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614667   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.614696   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.615100   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615260   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615408   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.615587   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.615848   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.615863   59378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:50:32.724784   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933432.694745980
	
	I0725 18:50:32.724810   59378 fix.go:216] guest clock: 1721933432.694745980
	I0725 18:50:32.724822   59378 fix.go:229] Guest: 2024-07-25 18:50:32.69474598 +0000 UTC Remote: 2024-07-25 18:50:32.611474903 +0000 UTC m=+371.708292453 (delta=83.271077ms)
	I0725 18:50:32.724850   59378 fix.go:200] guest clock delta is within tolerance: 83.271077ms
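	fix.go above compares the guest clock against the host and accepts the drift because the delta (~83ms) is inside tolerance. A tiny Go sketch of that comparison using the timestamps from the log; the 2-second threshold is an assumed value for illustration, not necessarily minikube's exact tolerance.

```go
// clock_delta_sketch.go: illustrative check mirroring the fix.go lines above.
// The 2s tolerance is an assumption; the timestamps are taken from the log.
package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between two clocks.
func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	host := time.Date(2024, 7, 25, 18, 50, 32, 611474903, time.UTC)  // Remote (host view)
	guest := time.Date(2024, 7, 25, 18, 50, 32, 694745980, time.UTC) // Guest clock
	d := clockDelta(guest, host)
	fmt.Printf("delta=%v within tolerance=%v\n", d, d <= 2*time.Second)
}
```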
	I0725 18:50:32.724864   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 18.543706361s
	I0725 18:50:32.724891   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.725152   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:32.727958   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728294   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.728340   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728478   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.728957   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729091   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729192   59378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:32.729243   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.729319   59378 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:32.729347   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.731757   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732040   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732063   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732081   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732196   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732384   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.732538   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732557   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732562   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.732734   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732734   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.732890   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.733041   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.733164   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.845665   59378 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:32.851484   59378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:32.994671   59378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:33.000655   59378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:33.000718   59378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:33.016541   59378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:50:33.016570   59378 start.go:495] detecting cgroup driver to use...
	I0725 18:50:33.016634   59378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:33.032473   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:33.046063   59378 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:33.046126   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:33.059249   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:33.072607   59378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:33.204647   59378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:33.353644   59378 docker.go:233] disabling docker service ...
	I0725 18:50:33.353719   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:33.368162   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:33.380709   59378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:33.521954   59378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:33.656011   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:33.668858   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:33.685751   59378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0725 18:50:33.685826   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.695022   59378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:33.695106   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.704447   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.713600   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.722782   59378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:33.733635   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.744226   59378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.761049   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
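	The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10, switches cgroup_manager to cgroupfs, moves conmon into the pod cgroup, and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A small Go sketch of the equivalent in-memory rewrites; the sample TOML input is an assumption, and the sysctl step is summarized in a comment rather than reproduced.

```go
// crio_conf_sketch.go: in-memory equivalent of the sed edits above; the sample
// TOML input is an assumption, not the VM's real 02-crio.conf.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image CRI-O should use.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Use cgroupfs as the cgroup driver and run conmon in the pod cgroup.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	// The real pipeline additionally seeds:
	//   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
	fmt.Print(conf)
}
```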
	I0725 18:50:33.771689   59378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:33.781648   59378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:33.781695   59378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:33.794549   59378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:50:33.803765   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:33.915398   59378 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:50:34.054477   59378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:34.054535   59378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:34.058998   59378 start.go:563] Will wait 60s for crictl version
	I0725 18:50:34.059058   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.062552   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:34.105552   59378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:34.105616   59378 ssh_runner.go:195] Run: crio --version
	I0725 18:50:34.134591   59378 ssh_runner.go:195] Run: crio --version
	I0725 18:50:34.166581   59378 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0725 18:50:34.167725   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:34.170389   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.170838   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:34.170869   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.171014   59378 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:34.174860   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
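	The command above rewrites /etc/hosts so host.minikube.internal resolves to the gateway IP: strip any existing mapping, append the new one, and copy the result back. A hedged Go sketch of the same upsert, operating on an in-memory copy rather than the real /etc/hosts; the sample contents are illustrative.

```go
// hosts_upsert_sketch.go: in-memory version of the /etc/hosts rewrite above;
// it never touches the real /etc/hosts.
package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line ending in name (the grep -v step) and
// appends "ip\tname" (the echo step).
func upsertHostsEntry(hosts, name, ip string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		f := strings.Fields(line)
		if len(f) > 0 && f[len(f)-1] == name {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	original := "127.0.0.1\tlocalhost\n192.168.72.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(original, "host.minikube.internal", "192.168.72.1"))
}
```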
	I0725 18:50:34.186830   59378 kubeadm.go:883] updating cluster {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:34.186934   59378 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0725 18:50:34.186964   59378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:34.221834   59378 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0725 18:50:34.221863   59378 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:34.221911   59378 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.221962   59378 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.221975   59378 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.221994   59378 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0725 18:50:34.222013   59378 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.221933   59378 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.222080   59378 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.222307   59378 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223376   59378 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.223405   59378 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0725 18:50:34.223394   59378 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.223416   59378 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223385   59378 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.223445   59378 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.223639   59378 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.223759   59378 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.460560   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0725 18:50:34.464591   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.478896   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.494335   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.507397   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.519589   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.524374   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.639570   59378 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0725 18:50:34.639620   59378 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.639628   59378 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0725 18:50:34.639664   59378 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.639678   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639701   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639728   59378 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0725 18:50:34.639749   59378 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.639756   59378 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0725 18:50:34.639710   59378 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0725 18:50:34.639789   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639791   59378 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.639793   59378 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.639815   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639822   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660351   59378 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0725 18:50:34.660401   59378 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.660418   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.660438   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.660446   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660488   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.660530   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.660621   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.748020   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0725 18:50:34.748120   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748133   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.748181   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.748204   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748254   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.761895   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.761960   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0725 18:50:34.762002   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.762056   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:34.762069   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.766440   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0725 18:50:34.766458   59378 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766478   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0725 18:50:34.766493   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766612   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0725 18:50:34.776491   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0725 18:50:34.806227   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0725 18:50:34.806283   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:34.806386   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
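	The stat -c "%s %y" calls here, together with the later "copy: skipping ... (exists)" lines, show the cache loader only transferring an image tarball when the copy already on the VM does not match. A rough Go sketch of that decision; the size-plus-mtime compare rule and the sample values are assumptions for illustration.

```go
// cache_copy_sketch.go: illustrative "skip if unchanged" decision behind the
// "copy: skipping ... (exists)" lines above.
package main

import (
	"fmt"
	"os"
)

// needsCopy reports whether the cached tarball differs from what the VM already has.
func needsCopy(localPath string, remoteSize, remoteModUnix int64) (bool, error) {
	fi, err := os.Stat(localPath)
	if err != nil {
		return true, err // no local stat info: be safe and copy
	}
	return fi.Size() != remoteSize || fi.ModTime().Unix() != remoteModUnix, nil
}

func main() {
	// Pretend the remote stat returned these (made-up) values for coredns_v1.11.1.
	copyIt, err := needsCopy("/tmp/coredns_v1.11.1", 18182400, 1721933434)
	if err != nil {
		fmt.Println("stat failed, forcing copy:", err)
		return
	}
	fmt.Println("transfer needed:", copyIt)
}
```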
	I0725 18:50:35.506093   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:32.098641   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:34.099078   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:31.540443   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.039950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.539852   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.039523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.539582   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.040355   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.539951   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.040161   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.540076   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:36.040195   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.787650   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:37.788363   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:36.755933   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.989415896s)
	I0725 18:50:36.755967   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0725 18:50:36.755980   59378 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.249846616s)
	I0725 18:50:36.756026   59378 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0725 18:50:36.755988   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.756064   59378 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.756113   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:36.756116   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.755938   59378 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.949524568s)
	I0725 18:50:36.756281   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0725 18:50:38.622350   59378 ssh_runner.go:235] Completed: which crictl: (1.866164977s)
	I0725 18:50:38.622426   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.866163984s)
	I0725 18:50:38.622504   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0725 18:50:38.622540   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622604   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622432   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.599286   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:37.098495   60732 node_ready.go:49] node "embed-certs-646344" has status "Ready":"True"
	I0725 18:50:37.098517   60732 node_ready.go:38] duration metric: took 7.003335292s for node "embed-certs-646344" to be "Ready" ...
	I0725 18:50:37.098526   60732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:37.104721   60732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109765   60732 pod_ready.go:92] pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.109788   60732 pod_ready.go:81] duration metric: took 5.033244ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109798   60732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113639   60732 pod_ready.go:92] pod "etcd-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.113661   60732 pod_ready.go:81] duration metric: took 3.854986ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113672   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.120875   60732 pod_ready.go:102] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:39.620552   60732 pod_ready.go:92] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:39.620573   60732 pod_ready.go:81] duration metric: took 2.506893984s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.620583   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628931   60732 pod_ready.go:92] pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.628959   60732 pod_ready.go:81] duration metric: took 1.008369558s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628973   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634812   60732 pod_ready.go:92] pod "kube-proxy-xk2lq" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.634840   60732 pod_ready.go:81] duration metric: took 5.858603ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634853   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
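	pod_ready.go above polls each system-critical pod until its Ready condition turns True, within a 6m0s budget. Below is a sketch of an equivalent wait loop using client-go; the kubeconfig path, namespace, pod name, and 2s poll interval are illustrative assumptions, not minikube's exact implementation.

```go
// wait_pod_ready_sketch.go: illustrative pod-Ready wait loop with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-646344", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```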
	I0725 18:50:36.540043   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.039832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.540456   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.039553   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.539530   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.040246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.539520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.039506   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.539963   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:41.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.290126   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:42.787353   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.108821   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.486186911s)
	I0725 18:50:41.108854   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0725 18:50:41.108878   59378 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108884   59378 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.486217866s)
	I0725 18:50:41.108919   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108925   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 18:50:41.109010   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366140   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.257196486s)
	I0725 18:50:44.366170   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0725 18:50:44.366175   59378 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.257147663s)
	I0725 18:50:44.366192   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0725 18:50:44.366206   59378 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366252   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:45.013042   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0725 18:50:45.013079   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:45.013131   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:41.641738   60732 pod_ready.go:92] pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:41.641758   60732 pod_ready.go:81] duration metric: took 1.006897558s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:41.641768   60732 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:43.648859   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.147477   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.539822   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.039895   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.539947   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.040433   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.540098   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.040089   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.540140   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.040238   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.539529   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:46.040232   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.287326   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:47.288029   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.372000   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358829497s)
	I0725 18:50:46.372038   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0725 18:50:46.372056   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:46.372117   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:48.326922   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954778301s)
	I0725 18:50:48.326952   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0725 18:50:48.326981   59378 cache_images.go:123] Successfully loaded all cached images
	I0725 18:50:48.326987   59378 cache_images.go:92] duration metric: took 14.105111756s to LoadCachedImages
	I0725 18:50:48.326998   59378 kubeadm.go:934] updating node { 192.168.72.62 8443 v1.31.0-beta.0 crio true true} ...
	I0725 18:50:48.327229   59378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-371663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
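	kubeadm.go:946 above renders the kubelet systemd drop-in, wiring the node name, node IP, and Kubernetes version into the ExecStart flags. A minimal Go sketch of producing that line with text/template; the template text is an assumption modeled on the unit shown above, not minikube's actual template.

```go
// kubelet_flags_sketch.go: illustrative rendering of the kubelet ExecStart line above.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	Version  string
	NodeName string
	NodeIP   string
}

const unit = `ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet ` +
	`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
	`--config=/var/lib/kubelet/config.yaml ` +
	`--hostname-override={{.NodeName}} ` +
	`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
	"--node-ip={{.NodeIP}}\n"

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, kubeletOpts{
		Version:  "v1.31.0-beta.0",
		NodeName: "no-preload-371663",
		NodeIP:   "192.168.72.62",
	})
}
```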
	I0725 18:50:48.327311   59378 ssh_runner.go:195] Run: crio config
	I0725 18:50:48.380082   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:48.380104   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:48.380116   59378 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:48.380141   59378 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.62 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-371663 NodeName:no-preload-371663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:48.380276   59378 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-371663"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:48.380365   59378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0725 18:50:48.390309   59378 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:48.390375   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:48.399357   59378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0725 18:50:48.426673   59378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0725 18:50:48.443648   59378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0725 18:50:48.460908   59378 ssh_runner.go:195] Run: grep 192.168.72.62	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:48.464505   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:48.475937   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:48.598976   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:48.614468   59378 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663 for IP: 192.168.72.62
	I0725 18:50:48.614495   59378 certs.go:194] generating shared ca certs ...
	I0725 18:50:48.614511   59378 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:48.614683   59378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:48.614722   59378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:48.614732   59378 certs.go:256] generating profile certs ...
	I0725 18:50:48.614802   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.key
	I0725 18:50:48.614860   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key.1b99cd2e
	I0725 18:50:48.614894   59378 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key
	I0725 18:50:48.615018   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:48.615047   59378 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:48.615055   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:48.615091   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:48.615150   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:48.615204   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:48.615259   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:48.615987   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:48.647327   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:48.689347   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:48.718281   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:48.749086   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0725 18:50:48.775795   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:48.804894   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:48.827724   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:50:48.850476   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:48.873193   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:48.897778   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:48.922891   59378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:48.940439   59378 ssh_runner.go:195] Run: openssl version
	I0725 18:50:48.945916   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:48.956285   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960454   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960503   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.965881   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:48.975282   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:48.984697   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988899   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988958   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.993992   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:49.003677   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:49.013434   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017584   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017633   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.022926   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:49.033066   59378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:49.037719   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:49.043668   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:49.049308   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:49.055105   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:49.060763   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:49.066635   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 18:50:49.072235   59378 kubeadm.go:392] StartCluster: {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:49.072358   59378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:49.072426   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.107696   59378 cri.go:89] found id: ""
	I0725 18:50:49.107780   59378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:49.118074   59378 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:49.118098   59378 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:49.118144   59378 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:49.127465   59378 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:49.128541   59378 kubeconfig.go:125] found "no-preload-371663" server: "https://192.168.72.62:8443"
	I0725 18:50:49.130601   59378 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:49.140027   59378 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.62
	I0725 18:50:49.140074   59378 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:49.140087   59378 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:49.140148   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.188682   59378 cri.go:89] found id: ""
	I0725 18:50:49.188743   59378 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:49.205634   59378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:49.214829   59378 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:49.214858   59378 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:49.214912   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:49.223758   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:49.223825   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:49.233245   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:49.241613   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:49.241669   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:49.249965   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.258343   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:49.258404   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.267058   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:49.275241   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:49.275297   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:49.284219   59378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:49.293754   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:49.398525   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.308879   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.505415   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.573519   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.655766   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:50.655857   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.148464   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:50.649767   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.539657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.039681   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.540207   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.040234   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.539937   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.039544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.539646   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.039759   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.540439   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.040293   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.786573   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.786918   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:53.790293   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.156896   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.656267   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.675997   59378 api_server.go:72] duration metric: took 1.02022659s to wait for apiserver process to appear ...
	I0725 18:50:51.676029   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:51.676060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:51.676567   59378 api_server.go:269] stopped: https://192.168.72.62:8443/healthz: Get "https://192.168.72.62:8443/healthz": dial tcp 192.168.72.62:8443: connect: connection refused
	I0725 18:50:52.176176   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.302009   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.302043   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.302060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.313888   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.313913   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.676316   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.680686   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:54.680712   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.176378   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.181169   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:55.181195   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.676817   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.681072   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:50:55.689674   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:50:55.689697   59378 api_server.go:131] duration metric: took 4.013661633s to wait for apiserver health ...
	I0725 18:50:55.689705   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:55.689711   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:55.691626   59378 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:50:55.692856   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:55.705154   59378 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:50:55.722942   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:55.735231   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:55.735270   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:55.735281   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:55.735294   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:55.735303   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:55.735316   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:55.735325   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:55.735338   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:55.735346   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:55.735357   59378 system_pods.go:74] duration metric: took 12.387054ms to wait for pod list to return data ...
	I0725 18:50:55.735370   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:55.738963   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:55.738984   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:55.738998   59378 node_conditions.go:105] duration metric: took 3.619707ms to run NodePressure ...
	I0725 18:50:55.739017   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:53.151773   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:55.647633   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.540537   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.040242   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.539493   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.039657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.540427   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.039461   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.539605   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.040573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.038936   59378 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043772   59378 kubeadm.go:739] kubelet initialised
	I0725 18:50:56.043793   59378 kubeadm.go:740] duration metric: took 4.834181ms waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043801   59378 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:56.050252   59378 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.055796   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055819   59378 pod_ready.go:81] duration metric: took 5.539256ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.055827   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055845   59378 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.059725   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059745   59378 pod_ready.go:81] duration metric: took 3.890205ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.059755   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059762   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.063388   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063409   59378 pod_ready.go:81] duration metric: took 3.63968ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.063419   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063427   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.126502   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126531   59378 pod_ready.go:81] duration metric: took 63.090083ms for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.126544   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126554   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.526433   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526465   59378 pod_ready.go:81] duration metric: took 399.900344ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.526477   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526485   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.926658   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926686   59378 pod_ready.go:81] duration metric: took 400.192009ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.926696   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926702   59378 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:57.326373   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326398   59378 pod_ready.go:81] duration metric: took 399.68759ms for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:57.326408   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326415   59378 pod_ready.go:38] duration metric: took 1.282607524s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:57.326433   59378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:57.338819   59378 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:57.338836   59378 kubeadm.go:597] duration metric: took 8.220732382s to restartPrimaryControlPlane
	I0725 18:50:57.338845   59378 kubeadm.go:394] duration metric: took 8.26661565s to StartCluster
	I0725 18:50:57.338862   59378 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.338938   59378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:57.341213   59378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.341506   59378 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:57.341574   59378 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:57.341660   59378 addons.go:69] Setting storage-provisioner=true in profile "no-preload-371663"
	I0725 18:50:57.341684   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:57.341696   59378 addons.go:234] Setting addon storage-provisioner=true in "no-preload-371663"
	I0725 18:50:57.341691   59378 addons.go:69] Setting default-storageclass=true in profile "no-preload-371663"
	W0725 18:50:57.341705   59378 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:57.341719   59378 addons.go:69] Setting metrics-server=true in profile "no-preload-371663"
	I0725 18:50:57.341737   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.341776   59378 addons.go:234] Setting addon metrics-server=true in "no-preload-371663"
	W0725 18:50:57.341790   59378 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:57.341727   59378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-371663"
	I0725 18:50:57.341827   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.342109   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342146   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342157   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342185   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342205   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342238   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.343259   59378 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:57.344618   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:57.359231   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0725 18:50:57.359295   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41709
	I0725 18:50:57.359759   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360261   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360528   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360554   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.360885   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.360970   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360989   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.361279   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.361299   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.361452   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.361551   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0725 18:50:57.361947   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.361954   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.362450   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.362468   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.362901   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.363495   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.363514   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.365316   59378 addons.go:234] Setting addon default-storageclass=true in "no-preload-371663"
	W0725 18:50:57.365329   59378 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:57.365349   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.365748   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.365785   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.377970   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0725 18:50:57.379022   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.379523   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.379543   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.379963   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.380124   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.382257   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0725 18:50:57.382648   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.382989   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
	I0725 18:50:57.383098   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383110   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.383292   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.383365   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.383456   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.383764   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.383854   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383876   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.384308   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.384905   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.384948   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.385117   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.385388   59378 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:57.386699   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:57.386716   59378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:57.386716   59378 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:57.386784   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.388097   59378 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.388127   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:57.388142   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.389322   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389752   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.389782   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389902   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.390094   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.390251   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.390402   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.391324   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391699   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.391723   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391870   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.392024   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.392156   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.392289   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.429920   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0725 18:50:57.430364   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.430865   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.430883   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.431250   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.431459   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.433381   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.433618   59378 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.433636   59378 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:57.433655   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.436318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437075   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.437100   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.437139   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437253   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.437431   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.437629   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.533461   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:57.551609   59378 node_ready.go:35] waiting up to 6m0s for node "no-preload-371663" to be "Ready" ...
	I0725 18:50:57.663269   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:57.663295   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:57.676948   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.698961   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.699589   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:57.699608   59378 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:57.732899   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:57.732928   59378 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:57.783734   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:58.930567   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.231552088s)
	I0725 18:50:58.930632   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930653   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930686   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.146908463s)
	I0725 18:50:58.930684   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.253701775s)
	I0725 18:50:58.930724   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930737   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930751   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930739   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931112   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931129   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931137   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931143   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931143   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931150   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931159   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931167   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931171   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931178   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931237   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931349   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931363   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931373   59378 addons.go:475] Verifying addon metrics-server=true in "no-preload-371663"
	I0725 18:50:58.931520   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931559   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931576   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932215   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932238   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932267   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.932277   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.932506   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.932541   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932556   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940231   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.940252   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.940516   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.940535   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940519   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.942747   59378 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0725 18:50:56.286642   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.787357   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.943983   59378 addons.go:510] duration metric: took 1.602421244s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0725 18:50:59.554933   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.648530   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:00.147626   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:56.539704   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.039573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.539523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.040168   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.540038   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.040304   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.540248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.039609   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.540022   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.039843   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.285836   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:03.287743   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.555887   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:04.056538   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:05.055354   59378 node_ready.go:49] node "no-preload-371663" has status "Ready":"True"
	I0725 18:51:05.055378   59378 node_ready.go:38] duration metric: took 7.50373959s for node "no-preload-371663" to be "Ready" ...
	I0725 18:51:05.055389   59378 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:51:05.061464   59378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066947   59378 pod_ready.go:92] pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.066967   59378 pod_ready.go:81] duration metric: took 5.477209ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066978   59378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071413   59378 pod_ready.go:92] pod "etcd-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.071431   59378 pod_ready.go:81] duration metric: took 4.445948ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071441   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076020   59378 pod_ready.go:92] pod "kube-apiserver-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.076042   59378 pod_ready.go:81] duration metric: took 4.593495ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076053   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
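	The pod_ready.go lines above record minikube polling each system-critical pod until its Ready condition flips to True (status "Ready":"False" on each retry, "Ready":"True" once the condition is met, then a duration metric). A minimal sketch of that kind of Ready poll using client-go is shown below; the kubeconfig path, namespace, pod name, and 6-minute deadline are taken from the log for illustration only, and this is not minikube's own implementation.

	// Hedged sketch: polling a pod's Ready condition with client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-5cfdc65f69-lq97z", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // retry until Ready or deadline
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}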
	I0725 18:51:02.648362   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:04.648959   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.539808   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.039515   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.540034   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.040266   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.539829   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.039496   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.540260   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.040236   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.540450   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:06.039595   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:06.039675   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:06.077020   60176 cri.go:89] found id: ""
	I0725 18:51:06.077048   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.077058   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:06.077066   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:06.077125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:06.109173   60176 cri.go:89] found id: ""
	I0725 18:51:06.109203   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.109213   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:06.109220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:06.109283   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:06.141838   60176 cri.go:89] found id: ""
	I0725 18:51:06.141875   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.141882   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:06.141888   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:06.141947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:06.175036   60176 cri.go:89] found id: ""
	I0725 18:51:06.175063   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.175074   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:06.175081   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:06.175144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:06.207497   60176 cri.go:89] found id: ""
	I0725 18:51:06.207519   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.207527   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:06.207532   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:06.207589   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:06.241910   60176 cri.go:89] found id: ""
	I0725 18:51:06.241936   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.241943   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:06.241948   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:06.242001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:06.273353   60176 cri.go:89] found id: ""
	I0725 18:51:06.273381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.273391   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:06.273398   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:06.273472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:06.307358   60176 cri.go:89] found id: ""
	I0725 18:51:06.307381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.307391   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:06.307401   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:06.307415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:06.360759   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:06.360792   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:06.373930   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:06.373956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:51:05.787345   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:08.287436   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:07.081865   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.082937   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:10.583975   59378 pod_ready.go:92] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.584001   59378 pod_ready.go:81] duration metric: took 5.507938695s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.584015   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588959   59378 pod_ready.go:92] pod "kube-proxy-bf9rt" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.588978   59378 pod_ready.go:81] duration metric: took 4.956126ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588986   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593238   59378 pod_ready.go:92] pod "kube-scheduler-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.593255   59378 pod_ready.go:81] duration metric: took 4.263169ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593263   59378 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:07.147874   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.649266   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:51:06.488979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
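	The "connection to the server localhost:8443 was refused" stderr in the block above means the v1.20.0 kube-apiserver on this node is not serving yet, which matches the empty crictl listings before it. A hedged sketch of running the same describe-nodes probe and capturing its combined output follows; it runs the command locally for illustration, whereas minikube drives it over SSH via ssh_runner.

	// Hedged sketch: executing the describe-nodes probe seen in the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output:\n%s\n", out)
		if err != nil {
			// With the apiserver down this exits with status 1 and prints
			// "The connection to the server localhost:8443 was refused".
			fmt.Println("describe nodes failed:", err)
		}
	}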
	I0725 18:51:06.489003   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:06.489018   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:06.553782   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:06.553813   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.093966   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:09.106176   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:09.106242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:09.143847   60176 cri.go:89] found id: ""
	I0725 18:51:09.143872   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.143880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:09.143885   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:09.143936   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:09.178605   60176 cri.go:89] found id: ""
	I0725 18:51:09.178636   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.178647   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:09.178654   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:09.178715   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:09.211866   60176 cri.go:89] found id: ""
	I0725 18:51:09.211892   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.211901   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:09.211906   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:09.211957   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:09.244343   60176 cri.go:89] found id: ""
	I0725 18:51:09.244371   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.244381   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:09.244389   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:09.244445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:09.279416   60176 cri.go:89] found id: ""
	I0725 18:51:09.279440   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.279448   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:09.279463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:09.279530   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:09.317039   60176 cri.go:89] found id: ""
	I0725 18:51:09.317064   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.317071   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:09.317077   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:09.317123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:09.347997   60176 cri.go:89] found id: ""
	I0725 18:51:09.348031   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.348042   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:09.348049   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:09.348107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:09.380485   60176 cri.go:89] found id: ""
	I0725 18:51:09.380514   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.380524   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:09.380535   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:09.380560   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:09.451881   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:09.451920   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.488427   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:09.488454   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:09.538096   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:09.538142   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:09.551001   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:09.551026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:09.628882   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:10.287604   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.787008   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.600101   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:15.102797   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.149625   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:14.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.129787   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:12.141852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:12.141915   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:12.178227   60176 cri.go:89] found id: ""
	I0725 18:51:12.178257   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.178266   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:12.178271   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:12.178329   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:12.209154   60176 cri.go:89] found id: ""
	I0725 18:51:12.209179   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.209186   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:12.209190   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:12.209238   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:12.244091   60176 cri.go:89] found id: ""
	I0725 18:51:12.244119   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.244127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:12.244134   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:12.244183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:12.277865   60176 cri.go:89] found id: ""
	I0725 18:51:12.277894   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.277906   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:12.277911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:12.277958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:12.311172   60176 cri.go:89] found id: ""
	I0725 18:51:12.311196   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.311207   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:12.311214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:12.311274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:12.341668   60176 cri.go:89] found id: ""
	I0725 18:51:12.341696   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.341706   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:12.341714   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:12.341775   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:12.375342   60176 cri.go:89] found id: ""
	I0725 18:51:12.375372   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.375383   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:12.375390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:12.375449   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:12.409783   60176 cri.go:89] found id: ""
	I0725 18:51:12.409807   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.409814   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:12.409822   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:12.409834   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:12.484503   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:12.484546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:12.522948   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:12.522974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:12.573975   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:12.574008   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:12.587600   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:12.587628   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:12.660403   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.161385   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:15.174773   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:15.174845   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:15.206845   60176 cri.go:89] found id: ""
	I0725 18:51:15.206871   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.206882   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:15.206889   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:15.206949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:15.239306   60176 cri.go:89] found id: ""
	I0725 18:51:15.239335   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.239344   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:15.239350   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:15.239437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:15.276152   60176 cri.go:89] found id: ""
	I0725 18:51:15.276187   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.276198   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:15.276207   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:15.276265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:15.309616   60176 cri.go:89] found id: ""
	I0725 18:51:15.309647   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.309659   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:15.309667   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:15.309729   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:15.343938   60176 cri.go:89] found id: ""
	I0725 18:51:15.343967   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.343978   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:15.343985   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:15.344051   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:15.380268   60176 cri.go:89] found id: ""
	I0725 18:51:15.380298   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.380310   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:15.380317   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:15.380448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:15.421291   60176 cri.go:89] found id: ""
	I0725 18:51:15.421337   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.421347   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:15.421353   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:15.421408   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:15.466805   60176 cri.go:89] found id: ""
	I0725 18:51:15.466826   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.466835   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:15.466845   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:15.466859   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:15.513464   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:15.513489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:15.567742   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:15.567775   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:15.583613   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:15.583647   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:15.653613   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.653637   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:15.653651   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:15.287256   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.786753   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.599678   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.600015   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.147792   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.148724   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:18.230294   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:18.244269   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:18.244352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:18.282255   60176 cri.go:89] found id: ""
	I0725 18:51:18.282281   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.282291   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:18.282298   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:18.282377   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:18.316217   60176 cri.go:89] found id: ""
	I0725 18:51:18.316250   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.316261   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:18.316269   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:18.316349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:18.347730   60176 cri.go:89] found id: ""
	I0725 18:51:18.347756   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.347764   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:18.347769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:18.347815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:18.379968   60176 cri.go:89] found id: ""
	I0725 18:51:18.379991   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.379999   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:18.380006   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:18.380062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:18.415621   60176 cri.go:89] found id: ""
	I0725 18:51:18.415644   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.415652   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:18.415657   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:18.415704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:18.452073   60176 cri.go:89] found id: ""
	I0725 18:51:18.452101   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.452109   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:18.452115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:18.452171   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:18.483337   60176 cri.go:89] found id: ""
	I0725 18:51:18.483382   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.483390   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:18.483396   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:18.483440   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:18.516941   60176 cri.go:89] found id: ""
	I0725 18:51:18.516966   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.516976   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:18.516987   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:18.517002   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:18.587295   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:18.587321   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:18.587338   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:18.666539   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:18.666569   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:18.707434   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:18.707465   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:18.761893   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:18.761932   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
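	Each cycle above probes every control-plane component with "sudo crictl ps -a --quiet --name=<component>", finds no containers ("found id: """), and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A short sketch of that per-component probe is below; --quiet prints one container ID per line, so empty output means the component was never started. The component list mirrors the log, and the local exec stands in for minikube's SSH runner, so treat it as an illustrative assumption rather than minikube's code.

	// Hedged sketch: the per-component crictl probe repeated in each cycle.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}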
	I0725 18:51:21.276464   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:21.291939   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:21.292011   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:21.326022   60176 cri.go:89] found id: ""
	I0725 18:51:21.326055   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.326066   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:21.326073   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:21.326130   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:21.366081   60176 cri.go:89] found id: ""
	I0725 18:51:21.366104   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.366112   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:21.366117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:21.366165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:20.287325   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.287799   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.101134   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:24.600119   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.647763   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:23.648088   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:25.649170   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.403086   60176 cri.go:89] found id: ""
	I0725 18:51:21.403111   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.403122   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:21.403128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:21.403208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:21.439268   60176 cri.go:89] found id: ""
	I0725 18:51:21.439297   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.439305   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:21.439310   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:21.439359   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:21.483601   60176 cri.go:89] found id: ""
	I0725 18:51:21.483631   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.483639   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:21.483645   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:21.483704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:21.519061   60176 cri.go:89] found id: ""
	I0725 18:51:21.519093   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.519103   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:21.519111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:21.519186   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:21.548781   60176 cri.go:89] found id: ""
	I0725 18:51:21.548806   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.548814   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:21.548820   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:21.548881   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:21.581940   60176 cri.go:89] found id: ""
	I0725 18:51:21.581963   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.581970   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:21.581979   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:21.581991   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:21.634758   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:21.634795   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.648358   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:21.648382   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:21.716109   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:21.716133   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:21.716149   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:21.794003   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:21.794030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.331731   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:24.344646   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:24.344709   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:24.385373   60176 cri.go:89] found id: ""
	I0725 18:51:24.385395   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.385403   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:24.385408   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:24.385453   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:24.417015   60176 cri.go:89] found id: ""
	I0725 18:51:24.417044   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.417054   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:24.417061   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:24.417125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:24.457093   60176 cri.go:89] found id: ""
	I0725 18:51:24.457118   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.457127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:24.457132   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:24.457197   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:24.489155   60176 cri.go:89] found id: ""
	I0725 18:51:24.489183   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.489192   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:24.489197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:24.489253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:24.521907   60176 cri.go:89] found id: ""
	I0725 18:51:24.521934   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.521943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:24.521949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:24.522006   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:24.553652   60176 cri.go:89] found id: ""
	I0725 18:51:24.553688   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.553698   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:24.553705   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:24.553765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:24.587957   60176 cri.go:89] found id: ""
	I0725 18:51:24.587989   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.587997   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:24.588002   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:24.588060   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:24.623564   60176 cri.go:89] found id: ""
	I0725 18:51:24.623591   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.623600   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:24.623609   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:24.623624   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:24.676176   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:24.676208   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:24.689179   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:24.689202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:24.761900   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:24.761928   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:24.761943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:24.845021   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:24.845058   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.287960   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:26.288704   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.788851   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.099186   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:29.100563   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.147374   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:30.148158   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.384900   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:27.398947   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:27.399009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:27.431604   60176 cri.go:89] found id: ""
	I0725 18:51:27.431632   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.431641   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:27.431648   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:27.431698   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:27.464167   60176 cri.go:89] found id: ""
	I0725 18:51:27.464201   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.464212   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:27.464220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:27.464279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:27.497951   60176 cri.go:89] found id: ""
	I0725 18:51:27.497985   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.497996   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:27.498003   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:27.498056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:27.535363   60176 cri.go:89] found id: ""
	I0725 18:51:27.535389   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.535401   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:27.535406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:27.535452   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:27.565506   60176 cri.go:89] found id: ""
	I0725 18:51:27.565531   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.565541   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:27.565548   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:27.565615   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:27.595635   60176 cri.go:89] found id: ""
	I0725 18:51:27.595662   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.595672   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:27.595678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:27.595734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:27.627482   60176 cri.go:89] found id: ""
	I0725 18:51:27.627511   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.627522   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:27.627529   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:27.627596   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:27.663481   60176 cri.go:89] found id: ""
	I0725 18:51:27.663507   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.663517   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:27.663530   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:27.663544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:27.746487   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:27.746519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:27.783100   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:27.783128   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:27.834865   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:27.834895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:27.849097   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:27.849124   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:27.914406   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:30.415417   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:30.429086   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:30.429151   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:30.470514   60176 cri.go:89] found id: ""
	I0725 18:51:30.470538   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.470561   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:30.470569   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:30.470629   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:30.503903   60176 cri.go:89] found id: ""
	I0725 18:51:30.503931   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.503942   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:30.503950   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:30.504014   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:30.535562   60176 cri.go:89] found id: ""
	I0725 18:51:30.535589   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.535597   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:30.535602   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:30.535667   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:30.567435   60176 cri.go:89] found id: ""
	I0725 18:51:30.567461   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.567471   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:30.567478   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:30.567538   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:30.604430   60176 cri.go:89] found id: ""
	I0725 18:51:30.604455   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.604465   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:30.604471   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:30.604540   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:30.644788   60176 cri.go:89] found id: ""
	I0725 18:51:30.644814   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.644834   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:30.644843   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:30.644908   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:30.678530   60176 cri.go:89] found id: ""
	I0725 18:51:30.678572   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.678585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:30.678593   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:30.678668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:30.713090   60176 cri.go:89] found id: ""
	I0725 18:51:30.713112   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.713120   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:30.713128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:30.713141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:30.792075   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:30.792106   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:30.829452   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:30.829482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:30.879437   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:30.879474   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:30.892281   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:30.892308   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:30.959814   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:31.286895   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.786731   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:31.599727   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.600800   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:35.601282   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:32.647508   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:34.648594   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.460838   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:33.474242   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:33.474351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:33.508097   60176 cri.go:89] found id: ""
	I0725 18:51:33.508125   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.508134   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:33.508140   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:33.508188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:33.542576   60176 cri.go:89] found id: ""
	I0725 18:51:33.542605   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.542612   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:33.542618   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:33.542666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:33.576079   60176 cri.go:89] found id: ""
	I0725 18:51:33.576106   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.576115   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:33.576122   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:33.576187   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:33.610618   60176 cri.go:89] found id: ""
	I0725 18:51:33.610639   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.610646   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:33.610651   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:33.610702   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:33.641925   60176 cri.go:89] found id: ""
	I0725 18:51:33.641960   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.641972   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:33.641979   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:33.642047   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:33.675283   60176 cri.go:89] found id: ""
	I0725 18:51:33.675318   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.675333   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:33.675346   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:33.675412   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:33.707991   60176 cri.go:89] found id: ""
	I0725 18:51:33.708017   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.708026   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:33.708034   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:33.708094   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:33.744209   60176 cri.go:89] found id: ""
	I0725 18:51:33.744237   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.744247   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
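
The block above is the container-inventory pass: for each control-plane component the runner executes crictl ps -a --quiet --name=<component>, and an empty ID list (found id: "") is what produces the No container was found matching warnings. A small sketch of the same query loop, in Go; assumed code, not minikube's cri.go:

// Hedged sketch: run the same crictl query as the log above for each component
// and report which ones have no containers at all.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		if strings.TrimSpace(string(out)) == "" {
			fmt.Printf("no container found matching %q\n", name)
		}
	}
}
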
	I0725 18:51:33.744258   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:33.744273   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:33.794620   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:33.794648   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:33.807089   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:33.807118   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:33.870937   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:33.870960   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:33.870976   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:33.953214   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:33.953249   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
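
With no control-plane containers found, each cycle falls back to collecting host-level diagnostics: the kubelet and CRI-O journals, dmesg, the (failing) describe-nodes call, and a crictl/docker container listing. A sketch that gathers the same sources using the exact command strings from the log; assumed code in Go, not minikube's logs.go:

// Hedged sketch: collect the diagnostic sources the cycle above gathers when the
// apiserver is down. Command strings are copied verbatim from the log lines.
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through bash and prints its combined output.
func gather(label, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", label, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
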
	I0725 18:51:36.287050   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.788127   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.100230   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:40.600037   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:37.147276   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:39.147994   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:36.491625   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:36.504949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:36.505023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:36.538077   60176 cri.go:89] found id: ""
	I0725 18:51:36.538101   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.538109   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:36.538114   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:36.538165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:36.570239   60176 cri.go:89] found id: ""
	I0725 18:51:36.570262   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.570269   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:36.570275   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:36.570325   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:36.603096   60176 cri.go:89] found id: ""
	I0725 18:51:36.603124   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.603133   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:36.603139   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:36.603196   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:36.637479   60176 cri.go:89] found id: ""
	I0725 18:51:36.637506   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.637518   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:36.637525   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:36.637580   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:36.670834   60176 cri.go:89] found id: ""
	I0725 18:51:36.670859   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.670868   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:36.670875   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:36.670942   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:36.707825   60176 cri.go:89] found id: ""
	I0725 18:51:36.707851   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.707859   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:36.707866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:36.707924   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:36.748014   60176 cri.go:89] found id: ""
	I0725 18:51:36.748046   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.748058   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:36.748067   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:36.748132   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:36.779939   60176 cri.go:89] found id: ""
	I0725 18:51:36.779967   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.779975   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:36.779982   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:36.779994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:36.836710   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:36.836741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:36.849791   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:36.849830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:36.919247   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:36.919270   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:36.919286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:36.994368   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:36.994405   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:39.530980   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:39.543355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:39.543417   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:39.576897   60176 cri.go:89] found id: ""
	I0725 18:51:39.576925   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.576935   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:39.576941   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:39.576996   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:39.610545   60176 cri.go:89] found id: ""
	I0725 18:51:39.610576   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.610584   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:39.610596   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:39.610651   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:39.642072   60176 cri.go:89] found id: ""
	I0725 18:51:39.642097   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.642107   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:39.642114   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:39.642173   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:39.673841   60176 cri.go:89] found id: ""
	I0725 18:51:39.673866   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.673874   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:39.673880   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:39.673933   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:39.706537   60176 cri.go:89] found id: ""
	I0725 18:51:39.706562   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.706571   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:39.706584   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:39.706635   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:39.744897   60176 cri.go:89] found id: ""
	I0725 18:51:39.744924   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.744935   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:39.744942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:39.745004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:39.780466   60176 cri.go:89] found id: ""
	I0725 18:51:39.780493   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.780503   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:39.780510   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:39.780581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:39.813672   60176 cri.go:89] found id: ""
	I0725 18:51:39.813694   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.813701   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:39.813709   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:39.813721   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:39.862459   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:39.862489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:39.875276   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:39.875304   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:39.941693   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:39.941715   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:39.941729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:40.017010   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:40.017055   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:41.286377   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.289761   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.600311   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.098813   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:41.647858   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.647939   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.559158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:42.571866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:42.571945   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:42.605268   60176 cri.go:89] found id: ""
	I0725 18:51:42.605317   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.605326   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:42.605332   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:42.605392   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:42.641719   60176 cri.go:89] found id: ""
	I0725 18:51:42.641753   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.641764   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:42.641774   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:42.641837   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:42.675667   60176 cri.go:89] found id: ""
	I0725 18:51:42.675695   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.675703   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:42.675711   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:42.675773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:42.709895   60176 cri.go:89] found id: ""
	I0725 18:51:42.709923   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.709933   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:42.709940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:42.710002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:42.742278   60176 cri.go:89] found id: ""
	I0725 18:51:42.742308   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.742318   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:42.742325   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:42.742395   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:42.773623   60176 cri.go:89] found id: ""
	I0725 18:51:42.773651   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.773661   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:42.773668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:42.773727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:42.810538   60176 cri.go:89] found id: ""
	I0725 18:51:42.810566   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.810576   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:42.810583   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:42.810657   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:42.850508   60176 cri.go:89] found id: ""
	I0725 18:51:42.850530   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.850537   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:42.850545   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:42.850556   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:42.901350   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:42.901389   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:42.914573   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:42.914600   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:42.978823   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:42.978852   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:42.978866   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:43.057323   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:43.057357   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:45.593677   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:45.607689   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:45.607801   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:45.640969   60176 cri.go:89] found id: ""
	I0725 18:51:45.640997   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.641007   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:45.641014   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:45.641075   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:45.672268   60176 cri.go:89] found id: ""
	I0725 18:51:45.672293   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.672300   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:45.672310   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:45.672396   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:45.705582   60176 cri.go:89] found id: ""
	I0725 18:51:45.705610   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.705618   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:45.705625   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:45.705686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:45.747705   60176 cri.go:89] found id: ""
	I0725 18:51:45.747737   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.747759   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:45.747766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:45.747815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:45.787258   60176 cri.go:89] found id: ""
	I0725 18:51:45.787284   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.787294   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:45.787302   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:45.787366   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:45.820971   60176 cri.go:89] found id: ""
	I0725 18:51:45.820992   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.821008   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:45.821019   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:45.821068   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:45.853828   60176 cri.go:89] found id: ""
	I0725 18:51:45.853858   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.853869   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:45.853876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:45.853935   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:45.886645   60176 cri.go:89] found id: ""
	I0725 18:51:45.886672   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.886682   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:45.886692   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:45.886708   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:45.953195   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:45.953223   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:45.953239   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:46.027894   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:46.027929   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:46.067935   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:46.067960   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:46.120467   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:46.120500   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:45.788103   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.287846   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:47.100357   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:49.100578   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.148035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:50.148589   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.634095   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:48.647390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:48.647464   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:48.683149   60176 cri.go:89] found id: ""
	I0725 18:51:48.683171   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.683178   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:48.683203   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:48.683252   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:48.720502   60176 cri.go:89] found id: ""
	I0725 18:51:48.720529   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.720539   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:48.720546   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:48.720593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:48.752927   60176 cri.go:89] found id: ""
	I0725 18:51:48.752954   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.752962   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:48.752968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:48.753025   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:48.788434   60176 cri.go:89] found id: ""
	I0725 18:51:48.788460   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.788468   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:48.788474   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:48.788520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:48.825157   60176 cri.go:89] found id: ""
	I0725 18:51:48.825184   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.825194   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:48.825199   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:48.825248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:48.859948   60176 cri.go:89] found id: ""
	I0725 18:51:48.859973   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.859981   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:48.859986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:48.860046   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:48.894788   60176 cri.go:89] found id: ""
	I0725 18:51:48.894811   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.894819   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:48.894824   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:48.894878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:48.929619   60176 cri.go:89] found id: ""
	I0725 18:51:48.929645   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.929653   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:48.929662   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:48.929675   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:49.001842   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:49.001865   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:49.001888   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:49.086265   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:49.086299   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:49.127674   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:49.127704   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:49.181388   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:49.181424   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:50.787213   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:53.287266   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.601462   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.099078   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:52.647863   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.648789   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.695119   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:51.707568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:51.707630   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:51.742936   60176 cri.go:89] found id: ""
	I0725 18:51:51.742963   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.742973   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:51.742980   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:51.743033   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:51.776584   60176 cri.go:89] found id: ""
	I0725 18:51:51.776610   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.776618   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:51.776623   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:51.776691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:51.809763   60176 cri.go:89] found id: ""
	I0725 18:51:51.809787   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.809795   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:51.809800   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:51.809846   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:51.843330   60176 cri.go:89] found id: ""
	I0725 18:51:51.843359   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.843366   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:51.843372   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:51.843428   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:51.877636   60176 cri.go:89] found id: ""
	I0725 18:51:51.877670   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.877680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:51.877685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:51.877734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:51.911846   60176 cri.go:89] found id: ""
	I0725 18:51:51.911869   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.911876   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:51.911881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:51.911927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:51.945447   60176 cri.go:89] found id: ""
	I0725 18:51:51.945474   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.945482   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:51.945488   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:51.945539   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:51.976801   60176 cri.go:89] found id: ""
	I0725 18:51:51.976828   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.976838   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:51.976848   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:51.976863   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:51.989131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:51.989158   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:52.055834   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:52.055857   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:52.055871   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:52.132360   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:52.132399   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:52.170676   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:52.170706   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:54.724654   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:54.738852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:54.738910   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:54.772356   60176 cri.go:89] found id: ""
	I0725 18:51:54.772386   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.772396   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:54.772403   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:54.772463   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:54.805079   60176 cri.go:89] found id: ""
	I0725 18:51:54.805105   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.805115   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:54.805122   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:54.805179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:54.836276   60176 cri.go:89] found id: ""
	I0725 18:51:54.836303   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.836313   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:54.836329   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:54.836394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:54.869019   60176 cri.go:89] found id: ""
	I0725 18:51:54.869046   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.869053   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:54.869059   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:54.869108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:54.905448   60176 cri.go:89] found id: ""
	I0725 18:51:54.905475   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.905485   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:54.905492   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:54.905553   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:54.937364   60176 cri.go:89] found id: ""
	I0725 18:51:54.937387   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.937396   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:54.937401   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:54.937448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:54.969287   60176 cri.go:89] found id: ""
	I0725 18:51:54.969322   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.969333   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:54.969340   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:54.969405   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:55.002779   60176 cri.go:89] found id: ""
	I0725 18:51:55.002804   60176 logs.go:276] 0 containers: []
	W0725 18:51:55.002811   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:55.002819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:55.002830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:55.015588   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:55.015614   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:55.093349   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:55.093372   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:55.093388   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:55.174006   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:55.174046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:55.211316   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:55.211347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:55.787379   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.286757   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:56.099628   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.100403   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:00.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.148430   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:59.648971   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.762027   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:57.774121   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:57.774194   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:57.814748   60176 cri.go:89] found id: ""
	I0725 18:51:57.814779   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.814790   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:57.814798   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:57.814860   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:57.851037   60176 cri.go:89] found id: ""
	I0725 18:51:57.851063   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.851070   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:57.851075   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:57.851123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:57.882717   60176 cri.go:89] found id: ""
	I0725 18:51:57.882749   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.882760   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:57.882768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:57.882830   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:57.917019   60176 cri.go:89] found id: ""
	I0725 18:51:57.917049   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.917059   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:57.917066   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:57.917126   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:57.950853   60176 cri.go:89] found id: ""
	I0725 18:51:57.950882   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.950891   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:57.950896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:57.950962   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:57.991946   60176 cri.go:89] found id: ""
	I0725 18:51:57.991970   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.991980   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:57.991986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:57.992049   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:58.037572   60176 cri.go:89] found id: ""
	I0725 18:51:58.037602   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.037611   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:58.037617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:58.037679   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:58.073018   60176 cri.go:89] found id: ""
	I0725 18:51:58.073040   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.073048   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:58.073056   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:58.073068   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:58.144357   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:58.144382   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:58.144398   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:58.224162   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:58.224202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:58.260888   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:58.260914   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:58.313819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:58.313848   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:00.826939   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:00.838883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:00.838951   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:00.872544   60176 cri.go:89] found id: ""
	I0725 18:52:00.872573   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.872584   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:00.872600   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:00.872663   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:00.903504   60176 cri.go:89] found id: ""
	I0725 18:52:00.903526   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.903533   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:00.903539   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:00.903585   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:00.938057   60176 cri.go:89] found id: ""
	I0725 18:52:00.938085   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.938095   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:00.938103   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:00.938168   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:00.970586   60176 cri.go:89] found id: ""
	I0725 18:52:00.970616   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.970625   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:00.970631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:00.970699   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:01.004158   60176 cri.go:89] found id: ""
	I0725 18:52:01.004192   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.004201   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:01.004205   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:01.004265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:01.036833   60176 cri.go:89] found id: ""
	I0725 18:52:01.036862   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.036871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:01.036876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:01.036927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:01.072207   60176 cri.go:89] found id: ""
	I0725 18:52:01.072236   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.072247   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:01.072253   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:01.072309   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:01.110805   60176 cri.go:89] found id: ""
	I0725 18:52:01.110859   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.110871   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:01.110882   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:01.110898   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:01.150422   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:01.150448   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:01.198988   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:01.199026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:01.212826   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:01.212860   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:01.282008   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:01.282034   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:01.282054   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:00.787431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.286174   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.599299   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:05.099494   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.147372   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:04.147989   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.148300   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.865014   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:03.877335   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:03.877419   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:03.913376   60176 cri.go:89] found id: ""
	I0725 18:52:03.913406   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.913413   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:03.913420   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:03.913469   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:03.948997   60176 cri.go:89] found id: ""
	I0725 18:52:03.949022   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.949029   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:03.949034   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:03.949082   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:03.985320   60176 cri.go:89] found id: ""
	I0725 18:52:03.985353   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.985361   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:03.985367   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:03.985423   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:04.019626   60176 cri.go:89] found id: ""
	I0725 18:52:04.019648   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.019656   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:04.019662   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:04.019716   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:04.050947   60176 cri.go:89] found id: ""
	I0725 18:52:04.050978   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.050989   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:04.050997   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:04.051066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:04.083581   60176 cri.go:89] found id: ""
	I0725 18:52:04.083613   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.083625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:04.083633   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:04.083712   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:04.117537   60176 cri.go:89] found id: ""
	I0725 18:52:04.117574   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.117585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:04.117592   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:04.117652   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:04.151531   60176 cri.go:89] found id: ""
	I0725 18:52:04.151556   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.151563   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:04.151575   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:04.151593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:04.201037   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:04.201067   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:04.214848   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:04.214879   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:04.281309   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:04.281338   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:04.281353   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:04.360880   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:04.360913   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:05.287780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.288971   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.100417   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:09.602529   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:08.149450   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:10.647672   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.899950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:06.912053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:06.912124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:06.945726   60176 cri.go:89] found id: ""
	I0725 18:52:06.945752   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.945761   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:06.945766   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:06.945824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:06.979170   60176 cri.go:89] found id: ""
	I0725 18:52:06.979200   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.979210   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:06.979217   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:06.979279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:07.009633   60176 cri.go:89] found id: ""
	I0725 18:52:07.009661   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.009670   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:07.009675   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:07.009735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:07.042022   60176 cri.go:89] found id: ""
	I0725 18:52:07.042045   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.042054   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:07.042061   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:07.042121   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:07.074755   60176 cri.go:89] found id: ""
	I0725 18:52:07.074779   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.074787   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:07.074792   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:07.074853   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:07.109421   60176 cri.go:89] found id: ""
	I0725 18:52:07.109447   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.109457   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:07.109464   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:07.109522   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:07.144848   60176 cri.go:89] found id: ""
	I0725 18:52:07.144879   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.144889   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:07.144897   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:07.144956   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:07.182129   60176 cri.go:89] found id: ""
	I0725 18:52:07.182157   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.182169   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:07.182178   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:07.182192   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:07.235471   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:07.235509   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:07.251999   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:07.252025   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:07.334671   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:07.334691   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:07.334703   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:07.415819   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:07.415853   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.953603   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:09.966281   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:09.966362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:09.998237   60176 cri.go:89] found id: ""
	I0725 18:52:09.998259   60176 logs.go:276] 0 containers: []
	W0725 18:52:09.998267   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:09.998272   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:09.998332   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:10.030191   60176 cri.go:89] found id: ""
	I0725 18:52:10.030213   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.030220   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:10.030228   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:10.030273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:10.062117   60176 cri.go:89] found id: ""
	I0725 18:52:10.062144   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.062154   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:10.062159   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:10.062208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:10.093801   60176 cri.go:89] found id: ""
	I0725 18:52:10.093831   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.093841   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:10.093848   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:10.093911   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:10.125705   60176 cri.go:89] found id: ""
	I0725 18:52:10.125731   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.125741   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:10.125748   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:10.125814   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:10.158731   60176 cri.go:89] found id: ""
	I0725 18:52:10.158753   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.158761   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:10.158766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:10.158810   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:10.190408   60176 cri.go:89] found id: ""
	I0725 18:52:10.190435   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.190443   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:10.190449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:10.190503   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:10.221937   60176 cri.go:89] found id: ""
	I0725 18:52:10.221967   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.221977   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:10.221992   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:10.222007   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:10.270299   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:10.270332   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:10.283787   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:10.283823   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:10.358121   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:10.358146   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:10.358163   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:10.437607   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:10.437643   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.786088   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:11.786251   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:13.786457   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.099676   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.600380   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.647922   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.648433   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.978064   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:12.995812   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:12.995868   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:13.041196   60176 cri.go:89] found id: ""
	I0725 18:52:13.041222   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.041231   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:13.041239   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:13.041290   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:13.074981   60176 cri.go:89] found id: ""
	I0725 18:52:13.075005   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.075013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:13.075018   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:13.075078   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:13.108689   60176 cri.go:89] found id: ""
	I0725 18:52:13.108714   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.108725   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:13.108732   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:13.108788   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:13.144876   60176 cri.go:89] found id: ""
	I0725 18:52:13.144903   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.144913   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:13.144920   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:13.145008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:13.177912   60176 cri.go:89] found id: ""
	I0725 18:52:13.177936   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.177943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:13.177949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:13.178004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:13.208752   60176 cri.go:89] found id: ""
	I0725 18:52:13.208783   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.208794   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:13.208802   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:13.208861   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:13.240146   60176 cri.go:89] found id: ""
	I0725 18:52:13.240181   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.240191   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:13.240197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:13.240265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:13.276749   60176 cri.go:89] found id: ""
	I0725 18:52:13.276775   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.276783   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:13.276793   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:13.276808   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:13.342307   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:13.342341   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:13.342358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:13.426659   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:13.426691   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:13.462986   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:13.463014   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:13.513921   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:13.513956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.028587   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:16.041712   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:16.041771   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:16.074562   60176 cri.go:89] found id: ""
	I0725 18:52:16.074593   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.074603   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:16.074611   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:16.074668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:16.110581   60176 cri.go:89] found id: ""
	I0725 18:52:16.110610   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.110620   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:16.110627   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:16.110686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:16.145233   60176 cri.go:89] found id: ""
	I0725 18:52:16.145256   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.145266   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:16.145274   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:16.145333   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:16.180032   60176 cri.go:89] found id: ""
	I0725 18:52:16.180059   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.180070   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:16.180084   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:16.180147   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:16.211984   60176 cri.go:89] found id: ""
	I0725 18:52:16.212013   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.212021   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:16.212028   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:16.212086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:16.243930   60176 cri.go:89] found id: ""
	I0725 18:52:16.243958   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.243965   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:16.243970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:16.244018   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:16.276858   60176 cri.go:89] found id: ""
	I0725 18:52:16.276886   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.276895   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:16.276903   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:16.276964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:16.309039   60176 cri.go:89] found id: ""
	I0725 18:52:16.309068   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.309079   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:16.309089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:16.309103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:16.358664   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:16.358699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.371701   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:16.371733   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:52:15.786767   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.787058   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.099941   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.100836   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.148099   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.150035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:52:16.440851   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:16.440877   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:16.440892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:16.515546   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:16.515581   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.053916   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:19.067831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:19.067899   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:19.100740   60176 cri.go:89] found id: ""
	I0725 18:52:19.100765   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.100776   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:19.100783   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:19.100844   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:19.137247   60176 cri.go:89] found id: ""
	I0725 18:52:19.137272   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.137279   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:19.137284   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:19.137348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:19.181550   60176 cri.go:89] found id: ""
	I0725 18:52:19.181582   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.181594   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:19.181601   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:19.181666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:19.215392   60176 cri.go:89] found id: ""
	I0725 18:52:19.215418   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.215427   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:19.215433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:19.215495   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:19.247896   60176 cri.go:89] found id: ""
	I0725 18:52:19.247923   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.247933   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:19.247940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:19.248001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:19.285250   60176 cri.go:89] found id: ""
	I0725 18:52:19.285276   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.285286   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:19.285293   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:19.285362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:19.323470   60176 cri.go:89] found id: ""
	I0725 18:52:19.323500   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.323510   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:19.323518   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:19.323583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:19.358435   60176 cri.go:89] found id: ""
	I0725 18:52:19.358458   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.358466   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:19.358475   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:19.358491   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:19.422806   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:19.422825   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:19.422837   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:19.504316   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:19.504370   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.543929   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:19.543956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:19.596268   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:19.596300   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:20.286982   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.287235   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.601342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.099874   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.648118   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.147655   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.148904   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.110193   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:22.123411   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:22.123472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:22.158539   60176 cri.go:89] found id: ""
	I0725 18:52:22.158577   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.158588   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:22.158595   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:22.158654   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:22.196231   60176 cri.go:89] found id: ""
	I0725 18:52:22.196260   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.196270   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:22.196277   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:22.196354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:22.233119   60176 cri.go:89] found id: ""
	I0725 18:52:22.233150   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.233160   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:22.233167   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:22.233231   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:22.265273   60176 cri.go:89] found id: ""
	I0725 18:52:22.265302   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.265312   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:22.265322   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:22.265384   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:22.298933   60176 cri.go:89] found id: ""
	I0725 18:52:22.298959   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.298968   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:22.298982   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:22.299055   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:22.330841   60176 cri.go:89] found id: ""
	I0725 18:52:22.330877   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.330888   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:22.330896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:22.330965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:22.363717   60176 cri.go:89] found id: ""
	I0725 18:52:22.363743   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.363753   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:22.363760   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:22.363818   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:22.398672   60176 cri.go:89] found id: ""
	I0725 18:52:22.398701   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.398711   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:22.398722   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:22.398739   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:22.452774   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:22.452807   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:22.465478   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:22.465507   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:22.538473   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:22.538492   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:22.538504   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:22.622982   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:22.623029   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:25.163174   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:25.176183   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:25.176242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:25.212455   60176 cri.go:89] found id: ""
	I0725 18:52:25.212488   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.212497   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:25.212504   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:25.212558   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:25.249901   60176 cri.go:89] found id: ""
	I0725 18:52:25.249930   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.249938   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:25.249943   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:25.250002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:25.284400   60176 cri.go:89] found id: ""
	I0725 18:52:25.284425   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.284435   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:25.284443   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:25.284510   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:25.322175   60176 cri.go:89] found id: ""
	I0725 18:52:25.322199   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.322208   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:25.322214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:25.322274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:25.358579   60176 cri.go:89] found id: ""
	I0725 18:52:25.358606   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.358613   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:25.358618   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:25.358668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:25.393516   60176 cri.go:89] found id: ""
	I0725 18:52:25.393541   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.393552   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:25.393559   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:25.393619   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:25.426256   60176 cri.go:89] found id: ""
	I0725 18:52:25.426284   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.426293   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:25.426300   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:25.426386   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:25.460227   60176 cri.go:89] found id: ""
	I0725 18:52:25.460249   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.460257   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:25.460265   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:25.460276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:25.512461   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:25.512494   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:25.526304   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:25.526347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:25.597593   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:25.597618   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:25.597634   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:25.674233   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:25.674269   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:24.787536   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:27.286447   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.100033   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.599703   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.648517   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:30.650728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.209473   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:28.223161   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:28.223226   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:28.260471   60176 cri.go:89] found id: ""
	I0725 18:52:28.260500   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.260510   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:28.260517   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:28.260578   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:28.296055   60176 cri.go:89] found id: ""
	I0725 18:52:28.296093   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.296109   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:28.296117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:28.296179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:28.327790   60176 cri.go:89] found id: ""
	I0725 18:52:28.327819   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.327830   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:28.327836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:28.327896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:28.359967   60176 cri.go:89] found id: ""
	I0725 18:52:28.359994   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.360005   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:28.360012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:28.360076   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:28.394025   60176 cri.go:89] found id: ""
	I0725 18:52:28.394057   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.394065   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:28.394070   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:28.394119   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:28.425844   60176 cri.go:89] found id: ""
	I0725 18:52:28.425866   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.425874   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:28.425881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:28.425952   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:28.459239   60176 cri.go:89] found id: ""
	I0725 18:52:28.459266   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.459276   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:28.459283   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:28.459355   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:28.493964   60176 cri.go:89] found id: ""
	I0725 18:52:28.493992   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.494004   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:28.494015   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:28.494030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:28.543108   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:28.543138   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:28.556408   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:28.556440   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:28.622780   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:28.622802   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:28.622815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:28.705901   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:28.705935   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.247642   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:31.260467   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:31.260536   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:31.293280   60176 cri.go:89] found id: ""
	I0725 18:52:31.293303   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.293311   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:31.293316   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:31.293372   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:31.325186   60176 cri.go:89] found id: ""
	I0725 18:52:31.325220   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.325232   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:31.325238   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:31.325295   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:31.359715   60176 cri.go:89] found id: ""
	I0725 18:52:31.359744   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.359756   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:31.359763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:31.359821   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:29.287628   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.787471   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.099921   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.600091   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.147181   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:35.147612   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.396998   60176 cri.go:89] found id: ""
	I0725 18:52:31.397031   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.397043   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:31.397051   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:31.397107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:31.430896   60176 cri.go:89] found id: ""
	I0725 18:52:31.430921   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.430934   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:31.430941   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:31.430993   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:31.464746   60176 cri.go:89] found id: ""
	I0725 18:52:31.464775   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.464785   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:31.464791   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:31.464856   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:31.500645   60176 cri.go:89] found id: ""
	I0725 18:52:31.500668   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.500677   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:31.500682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:31.500730   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:31.534394   60176 cri.go:89] found id: ""
	I0725 18:52:31.534418   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.534427   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:31.534434   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:31.534446   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:31.615633   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:31.615667   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.657138   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:31.657164   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:31.707872   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:31.707907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:31.721076   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:31.721100   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:31.787451   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
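Every iteration of the loop above ends the same way: each `crictl ps` filter comes back with an empty ID list ("0 containers"), and the bundled kubectl cannot reach the apiserver on localhost:8443, so the harness keeps re-collecting kubelet, dmesg, CRI-O and container-status logs. A minimal sketch of how these same checks could be reproduced by hand inside the node is shown below (hypothetical debugging session; it assumes shell access to the guest, e.g. via `minikube ssh`, and reuses only the crictl and v1.20.0 kubectl paths that appear in the log):

    # Hypothetical manual reproduction of the checks the wait loop performs.
    # 1. Ask CRI-O whether a kube-apiserver container exists at all
    #    (empty output corresponds to the "0 containers" lines above).
    sudo crictl ps -a --quiet --name=kube-apiserver
    # 2. Probe the apiserver port directly; "connection refused" matches
    #    the repeated describe-nodes failure on localhost:8443.
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
    # 3. Re-run the same describe-nodes call the harness uses.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
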
	I0725 18:52:34.288248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:34.301172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:34.301230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:34.333115   60176 cri.go:89] found id: ""
	I0725 18:52:34.333143   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.333153   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:34.333159   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:34.333206   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:34.368762   60176 cri.go:89] found id: ""
	I0725 18:52:34.368794   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.368805   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:34.368812   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:34.368875   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:34.404655   60176 cri.go:89] found id: ""
	I0725 18:52:34.404681   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.404691   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:34.404699   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:34.404759   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:34.438034   60176 cri.go:89] found id: ""
	I0725 18:52:34.438058   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.438068   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:34.438075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:34.438134   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:34.472642   60176 cri.go:89] found id: ""
	I0725 18:52:34.472667   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.472678   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:34.472684   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:34.472744   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:34.511813   60176 cri.go:89] found id: ""
	I0725 18:52:34.511846   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.511858   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:34.511876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:34.511947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:34.544142   60176 cri.go:89] found id: ""
	I0725 18:52:34.544172   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.544183   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:34.544190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:34.544253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:34.580404   60176 cri.go:89] found id: ""
	I0725 18:52:34.580428   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.580439   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:34.580451   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:34.580468   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:34.620866   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:34.620892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:34.675204   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:34.675237   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:34.688592   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:34.688616   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:34.760208   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:34.760234   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:34.760251   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:34.288570   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.786448   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.786936   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.099207   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.099682   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.100107   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.647899   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.147664   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.337593   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:37.353055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:37.353125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:37.386957   60176 cri.go:89] found id: ""
	I0725 18:52:37.386985   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.386996   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:37.387003   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:37.387062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:37.419464   60176 cri.go:89] found id: ""
	I0725 18:52:37.419489   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.419496   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:37.419501   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:37.419557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:37.452553   60176 cri.go:89] found id: ""
	I0725 18:52:37.452582   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.452592   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:37.452598   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:37.452660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:37.484946   60176 cri.go:89] found id: ""
	I0725 18:52:37.484971   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.484978   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:37.484983   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:37.485029   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:37.517509   60176 cri.go:89] found id: ""
	I0725 18:52:37.517535   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.517546   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:37.517554   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:37.517604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:37.549971   60176 cri.go:89] found id: ""
	I0725 18:52:37.549995   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.550003   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:37.550010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:37.550067   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:37.581630   60176 cri.go:89] found id: ""
	I0725 18:52:37.581661   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.581670   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:37.581676   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:37.581736   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:37.616677   60176 cri.go:89] found id: ""
	I0725 18:52:37.616705   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.616714   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:37.616727   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:37.616741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:37.630482   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:37.630517   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:37.699856   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:37.699883   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:37.699912   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:37.781132   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:37.781162   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:37.819877   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:37.819904   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.372910   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:40.385605   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:40.385672   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:40.420547   60176 cri.go:89] found id: ""
	I0725 18:52:40.420575   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.420586   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:40.420593   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:40.420642   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:40.455644   60176 cri.go:89] found id: ""
	I0725 18:52:40.455666   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.455674   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:40.455679   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:40.455735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:40.486576   60176 cri.go:89] found id: ""
	I0725 18:52:40.486599   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.486607   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:40.486613   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:40.486661   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:40.520015   60176 cri.go:89] found id: ""
	I0725 18:52:40.520038   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.520046   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:40.520053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:40.520115   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:40.550645   60176 cri.go:89] found id: ""
	I0725 18:52:40.550672   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.550680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:40.550685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:40.550739   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:40.584736   60176 cri.go:89] found id: ""
	I0725 18:52:40.584759   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.584766   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:40.584771   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:40.584827   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:40.620112   60176 cri.go:89] found id: ""
	I0725 18:52:40.620140   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.620151   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:40.620158   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:40.620221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:40.660888   60176 cri.go:89] found id: ""
	I0725 18:52:40.660910   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.660917   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:40.660926   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:40.660937   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.713935   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:40.713967   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:40.727194   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:40.727218   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:40.797362   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:40.797387   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:40.797408   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:40.878723   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:40.878756   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:41.286942   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.288780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.600347   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:45.099379   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.148037   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:44.648236   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.421579   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:43.434054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:43.434113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:43.468844   60176 cri.go:89] found id: ""
	I0725 18:52:43.468870   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.468880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:43.468887   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:43.468948   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:43.501075   60176 cri.go:89] found id: ""
	I0725 18:52:43.501102   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.501113   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:43.501120   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:43.501175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:43.533347   60176 cri.go:89] found id: ""
	I0725 18:52:43.533372   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.533381   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:43.533387   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:43.533439   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:43.569764   60176 cri.go:89] found id: ""
	I0725 18:52:43.569787   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.569795   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:43.569801   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:43.569851   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:43.604897   60176 cri.go:89] found id: ""
	I0725 18:52:43.604924   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.604935   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:43.604942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:43.604999   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:43.638584   60176 cri.go:89] found id: ""
	I0725 18:52:43.638621   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.638633   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:43.638640   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:43.638691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:43.672302   60176 cri.go:89] found id: ""
	I0725 18:52:43.672348   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.672359   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:43.672366   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:43.672425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:43.708589   60176 cri.go:89] found id: ""
	I0725 18:52:43.708620   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.708630   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:43.708641   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:43.708660   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:43.761733   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:43.761766   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:43.775233   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:43.775258   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:43.840767   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:43.840788   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:43.840803   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:43.914698   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:43.914730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:45.786511   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.787882   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.100130   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.600576   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.147728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.648227   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:46.451913   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:46.465836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:46.465896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:46.499330   60176 cri.go:89] found id: ""
	I0725 18:52:46.499359   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.499369   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:46.499381   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:46.499446   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:46.537724   60176 cri.go:89] found id: ""
	I0725 18:52:46.537748   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.537758   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:46.537764   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:46.537825   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:46.568410   60176 cri.go:89] found id: ""
	I0725 18:52:46.568437   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.568446   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:46.568453   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:46.568519   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:46.599497   60176 cri.go:89] found id: ""
	I0725 18:52:46.599525   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.599535   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:46.599542   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:46.599607   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:46.631388   60176 cri.go:89] found id: ""
	I0725 18:52:46.631418   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.631427   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:46.631433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:46.631489   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:46.670666   60176 cri.go:89] found id: ""
	I0725 18:52:46.670688   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.670695   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:46.670701   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:46.670756   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:46.702825   60176 cri.go:89] found id: ""
	I0725 18:52:46.702862   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.702874   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:46.702883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:46.702947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:46.738431   60176 cri.go:89] found id: ""
	I0725 18:52:46.738459   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.738469   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:46.738479   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:46.738493   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:46.796704   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:46.796748   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:46.812042   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:46.812072   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:46.884905   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:46.884927   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:46.884942   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:46.965733   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:46.965773   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.505190   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:49.519648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:49.519733   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:49.559027   60176 cri.go:89] found id: ""
	I0725 18:52:49.559057   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.559064   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:49.559072   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:49.559124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:49.591468   60176 cri.go:89] found id: ""
	I0725 18:52:49.591489   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.591497   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:49.591503   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:49.591557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:49.629091   60176 cri.go:89] found id: ""
	I0725 18:52:49.629120   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.629129   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:49.629135   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:49.629199   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:49.664584   60176 cri.go:89] found id: ""
	I0725 18:52:49.664621   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.664633   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:49.664641   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:49.664693   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:49.695208   60176 cri.go:89] found id: ""
	I0725 18:52:49.695237   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.695247   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:49.695258   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:49.695323   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:49.726260   60176 cri.go:89] found id: ""
	I0725 18:52:49.726288   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.726299   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:49.726306   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:49.726468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:49.759938   60176 cri.go:89] found id: ""
	I0725 18:52:49.759969   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.759981   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:49.759990   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:49.760043   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:49.794113   60176 cri.go:89] found id: ""
	I0725 18:52:49.794142   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.794153   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:49.794164   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:49.794178   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.834409   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:49.834443   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:49.890684   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:49.890730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:49.904504   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:49.904534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:49.971482   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:49.971508   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:49.971523   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:50.286712   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.786827   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.099988   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.600144   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.147545   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.147590   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:56.148752   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.552586   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:52.564658   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:52.564732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:52.604434   60176 cri.go:89] found id: ""
	I0725 18:52:52.604460   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.604470   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:52.604478   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:52.604532   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:52.638870   60176 cri.go:89] found id: ""
	I0725 18:52:52.638893   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.638907   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:52.638914   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:52.638973   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:52.670494   60176 cri.go:89] found id: ""
	I0725 18:52:52.670521   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.670531   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:52.670538   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:52.670604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:52.702250   60176 cri.go:89] found id: ""
	I0725 18:52:52.702282   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.702291   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:52.702298   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:52.702360   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:52.734144   60176 cri.go:89] found id: ""
	I0725 18:52:52.734172   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.734181   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:52.734187   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:52.734241   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:52.767581   60176 cri.go:89] found id: ""
	I0725 18:52:52.767606   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.767617   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:52.767624   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:52.767687   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:52.798874   60176 cri.go:89] found id: ""
	I0725 18:52:52.798895   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.798903   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:52.798908   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:52.798965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:52.829237   60176 cri.go:89] found id: ""
	I0725 18:52:52.829266   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.829276   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:52.829287   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:52.829309   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:52.879820   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:52.879856   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:52.893453   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:52.893477   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:52.962899   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:52.962925   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:52.962944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:53.042202   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:53.042234   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.581146   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:55.594458   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:55.594529   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:55.628122   60176 cri.go:89] found id: ""
	I0725 18:52:55.628152   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.628163   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:55.628170   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:55.628240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:55.661098   60176 cri.go:89] found id: ""
	I0725 18:52:55.661126   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.661137   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:55.661143   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:55.661195   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:55.694635   60176 cri.go:89] found id: ""
	I0725 18:52:55.694664   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.694675   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:55.694682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:55.694746   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:55.728875   60176 cri.go:89] found id: ""
	I0725 18:52:55.728902   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.728912   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:55.728924   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:55.728986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:55.764386   60176 cri.go:89] found id: ""
	I0725 18:52:55.764414   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.764423   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:55.764430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:55.764487   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:55.798285   60176 cri.go:89] found id: ""
	I0725 18:52:55.798335   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.798348   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:55.798355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:55.798407   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:55.833049   60176 cri.go:89] found id: ""
	I0725 18:52:55.833076   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.833083   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:55.833088   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:55.833144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:55.872278   60176 cri.go:89] found id: ""
	I0725 18:52:55.872310   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.872335   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:55.872347   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:55.872362   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.908301   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:55.908344   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:55.960197   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:55.960230   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:55.973912   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:55.973941   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:56.042103   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:56.042128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:56.042141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:54.787516   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.286820   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.099342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:59.099712   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.647566   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:00.647721   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.618832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:58.631315   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:58.631374   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:58.666492   60176 cri.go:89] found id: ""
	I0725 18:52:58.666521   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.666532   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:58.666540   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:58.666608   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:58.700391   60176 cri.go:89] found id: ""
	I0725 18:52:58.700421   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.700431   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:58.700450   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:58.700518   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:58.734582   60176 cri.go:89] found id: ""
	I0725 18:52:58.734608   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.734617   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:58.734621   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:58.734692   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:58.767777   60176 cri.go:89] found id: ""
	I0725 18:52:58.767806   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.767817   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:58.767823   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:58.767886   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:58.801021   60176 cri.go:89] found id: ""
	I0725 18:52:58.801046   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.801053   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:58.801058   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:58.801102   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:58.833191   60176 cri.go:89] found id: ""
	I0725 18:52:58.833223   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.833231   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:58.833236   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:58.833284   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:58.864805   60176 cri.go:89] found id: ""
	I0725 18:52:58.864839   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.864849   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:58.864854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:58.864916   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:58.896342   60176 cri.go:89] found id: ""
	I0725 18:52:58.896373   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.896384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:58.896396   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:58.896415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:58.950614   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:58.950652   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:58.974026   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:58.974063   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:59.056282   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:59.056305   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:59.056349   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:59.138254   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:59.138292   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:59.785805   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.787477   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.099859   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.604940   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.147177   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:05.147885   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.680405   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:01.693093   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:01.693161   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:01.725456   60176 cri.go:89] found id: ""
	I0725 18:53:01.725483   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.725494   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:01.725501   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:01.725562   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:01.757644   60176 cri.go:89] found id: ""
	I0725 18:53:01.757677   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.757688   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:01.757694   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:01.757765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:01.793640   60176 cri.go:89] found id: ""
	I0725 18:53:01.793660   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.793667   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:01.793672   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:01.793718   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:01.829336   60176 cri.go:89] found id: ""
	I0725 18:53:01.829368   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.829379   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:01.829386   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:01.829442   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:01.864597   60176 cri.go:89] found id: ""
	I0725 18:53:01.864625   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.864636   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:01.864643   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:01.864704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:01.895962   60176 cri.go:89] found id: ""
	I0725 18:53:01.895990   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.896001   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:01.896012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:01.896070   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:01.926426   60176 cri.go:89] found id: ""
	I0725 18:53:01.926451   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.926459   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:01.926463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:01.926517   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:01.957722   60176 cri.go:89] found id: ""
	I0725 18:53:01.957746   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.957755   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:01.957764   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:01.957779   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:02.012061   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:02.012096   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:02.025396   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:02.025423   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:02.088683   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:02.088706   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:02.088718   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:02.170941   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:02.170974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.713619   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:04.734911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:04.734970   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:04.793399   60176 cri.go:89] found id: ""
	I0725 18:53:04.793427   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.793438   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:04.793445   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:04.793493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:04.823679   60176 cri.go:89] found id: ""
	I0725 18:53:04.823711   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.823723   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:04.823729   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:04.823793   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:04.854922   60176 cri.go:89] found id: ""
	I0725 18:53:04.854957   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.854964   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:04.854970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:04.855023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:04.886913   60176 cri.go:89] found id: ""
	I0725 18:53:04.886937   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.886945   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:04.886953   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:04.887008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:04.919868   60176 cri.go:89] found id: ""
	I0725 18:53:04.919896   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.919907   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:04.919914   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:04.919979   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:04.953542   60176 cri.go:89] found id: ""
	I0725 18:53:04.953571   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.953581   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:04.953588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:04.953649   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:04.986901   60176 cri.go:89] found id: ""
	I0725 18:53:04.986925   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.986932   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:04.986937   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:04.986986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:05.020084   60176 cri.go:89] found id: ""
	I0725 18:53:05.020124   60176 logs.go:276] 0 containers: []
	W0725 18:53:05.020133   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:05.020141   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:05.020153   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:05.075512   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:05.075544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:05.089227   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:05.089256   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:05.155689   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:05.155707   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:05.155719   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:05.230252   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:05.230286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.286327   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.286366   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.287693   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.099267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.100754   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:10.599173   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.148931   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:09.647549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.770919   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:07.784196   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:07.784354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:07.817549   60176 cri.go:89] found id: ""
	I0725 18:53:07.817581   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.817593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:07.817601   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:07.817674   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:07.852853   60176 cri.go:89] found id: ""
	I0725 18:53:07.852876   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.852883   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:07.852889   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:07.852941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:07.890344   60176 cri.go:89] found id: ""
	I0725 18:53:07.890370   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.890377   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:07.890383   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:07.890443   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:07.921718   60176 cri.go:89] found id: ""
	I0725 18:53:07.921749   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.921760   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:07.921768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:07.921824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:07.955721   60176 cri.go:89] found id: ""
	I0725 18:53:07.955753   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.955763   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:07.955769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:07.955820   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:07.987760   60176 cri.go:89] found id: ""
	I0725 18:53:07.987789   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.987799   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:07.987806   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:07.987878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:08.020881   60176 cri.go:89] found id: ""
	I0725 18:53:08.020912   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.020922   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:08.020929   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:08.020994   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:08.053983   60176 cri.go:89] found id: ""
	I0725 18:53:08.054013   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.054025   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:08.054037   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:08.054053   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:08.134954   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:08.134996   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:08.177056   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:08.177085   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:08.229080   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:08.229121   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:08.242211   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:08.242242   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:08.305979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:10.806662   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:10.819111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:10.819172   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:10.854609   60176 cri.go:89] found id: ""
	I0725 18:53:10.854639   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.854652   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:10.854660   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:10.854743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:10.893436   60176 cri.go:89] found id: ""
	I0725 18:53:10.893466   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.893478   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:10.893486   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:10.893555   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:10.927410   60176 cri.go:89] found id: ""
	I0725 18:53:10.927435   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.927444   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:10.927449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:10.927520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:10.958061   60176 cri.go:89] found id: ""
	I0725 18:53:10.958082   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.958090   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:10.958095   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:10.958149   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:10.988781   60176 cri.go:89] found id: ""
	I0725 18:53:10.988812   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.988824   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:10.988831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:10.988892   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:11.021096   60176 cri.go:89] found id: ""
	I0725 18:53:11.021126   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.021137   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:11.021145   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:11.021204   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:11.053320   60176 cri.go:89] found id: ""
	I0725 18:53:11.053355   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.053368   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:11.053377   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:11.053445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:11.085018   60176 cri.go:89] found id: ""
	I0725 18:53:11.085046   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.085055   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:11.085063   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:11.085074   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:11.136102   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:11.136139   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:11.150126   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:11.150154   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:11.219206   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:11.219226   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:11.219243   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:11.301501   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:11.301534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:10.787076   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.287049   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.100296   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:15.598090   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:11.648889   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:14.148494   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.148801   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.840771   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:13.853763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:13.853848   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:13.889060   60176 cri.go:89] found id: ""
	I0725 18:53:13.889089   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.889098   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:13.889105   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:13.889163   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:13.920861   60176 cri.go:89] found id: ""
	I0725 18:53:13.920889   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.920900   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:13.920910   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:13.920974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:13.952009   60176 cri.go:89] found id: ""
	I0725 18:53:13.952037   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.952048   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:13.952054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:13.952117   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:13.985991   60176 cri.go:89] found id: ""
	I0725 18:53:13.986020   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.986030   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:13.986036   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:13.986098   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:14.024968   60176 cri.go:89] found id: ""
	I0725 18:53:14.024995   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.025003   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:14.025008   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:14.025079   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:14.058861   60176 cri.go:89] found id: ""
	I0725 18:53:14.058886   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.058897   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:14.058912   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:14.058976   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:14.092587   60176 cri.go:89] found id: ""
	I0725 18:53:14.092613   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.092628   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:14.092634   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:14.092697   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:14.127085   60176 cri.go:89] found id: ""
	I0725 18:53:14.127115   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.127124   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:14.127134   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:14.127148   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:14.179505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:14.179537   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:14.192813   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:14.192840   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:14.256564   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:14.256590   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:14.256604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:14.338570   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:14.338604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:15.287102   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.787128   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.599288   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:19.600086   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:18.648466   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:21.147558   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.877636   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:16.891131   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:16.891208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:16.924210   60176 cri.go:89] found id: ""
	I0725 18:53:16.924243   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.924253   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:16.924261   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:16.924343   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:16.957212   60176 cri.go:89] found id: ""
	I0725 18:53:16.957240   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.957247   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:16.957254   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:16.957341   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:16.989205   60176 cri.go:89] found id: ""
	I0725 18:53:16.989236   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.989244   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:16.989249   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:16.989298   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:17.025775   60176 cri.go:89] found id: ""
	I0725 18:53:17.025801   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.025812   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:17.025819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:17.025887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:17.059185   60176 cri.go:89] found id: ""
	I0725 18:53:17.059213   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.059223   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:17.059229   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:17.059275   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:17.090838   60176 cri.go:89] found id: ""
	I0725 18:53:17.090863   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.090871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:17.090876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:17.090932   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:17.126012   60176 cri.go:89] found id: ""
	I0725 18:53:17.126036   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.126044   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:17.126048   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:17.126106   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:17.165369   60176 cri.go:89] found id: ""
	I0725 18:53:17.165394   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.165405   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:17.165415   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:17.165436   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:17.178730   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:17.178771   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:17.251639   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:17.251666   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:17.251681   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:17.334840   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:17.334887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:17.380868   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:17.380895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.931610   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:19.943864   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:19.943964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:19.975865   60176 cri.go:89] found id: ""
	I0725 18:53:19.975893   60176 logs.go:276] 0 containers: []
	W0725 18:53:19.975904   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:19.975910   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:19.975975   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:20.010230   60176 cri.go:89] found id: ""
	I0725 18:53:20.010258   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.010268   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:20.010274   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:20.010321   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:20.042591   60176 cri.go:89] found id: ""
	I0725 18:53:20.042618   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.042626   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:20.042632   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:20.042680   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:20.073184   60176 cri.go:89] found id: ""
	I0725 18:53:20.073212   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.073224   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:20.073231   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:20.073286   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:20.106770   60176 cri.go:89] found id: ""
	I0725 18:53:20.106798   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.106810   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:20.106818   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:20.106888   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:20.141368   60176 cri.go:89] found id: ""
	I0725 18:53:20.141419   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.141429   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:20.141437   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:20.141496   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:20.174814   60176 cri.go:89] found id: ""
	I0725 18:53:20.174841   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.174852   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:20.174859   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:20.174918   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:20.208463   60176 cri.go:89] found id: ""
	I0725 18:53:20.208489   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.208497   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:20.208505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:20.208519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:20.220843   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:20.220867   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:20.287846   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:20.287871   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:20.287887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:20.362354   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:20.362391   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:20.399616   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:20.399650   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.790264   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.288082   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.100856   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:24.600029   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:23.148297   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:25.647615   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.950804   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:22.963553   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:22.963625   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:22.996193   60176 cri.go:89] found id: ""
	I0725 18:53:22.996215   60176 logs.go:276] 0 containers: []
	W0725 18:53:22.996222   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:22.996228   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:22.996273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:23.029417   60176 cri.go:89] found id: ""
	I0725 18:53:23.029446   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.029455   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:23.029460   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:23.029508   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:23.062381   60176 cri.go:89] found id: ""
	I0725 18:53:23.062406   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.062414   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:23.062419   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:23.062471   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:23.093948   60176 cri.go:89] found id: ""
	I0725 18:53:23.093975   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.093987   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:23.093995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:23.094066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:23.128049   60176 cri.go:89] found id: ""
	I0725 18:53:23.128076   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.128085   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:23.128091   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:23.128139   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:23.164593   60176 cri.go:89] found id: ""
	I0725 18:53:23.164617   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.164625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:23.164631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:23.164683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:23.197996   60176 cri.go:89] found id: ""
	I0725 18:53:23.198024   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.198032   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:23.198037   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:23.198087   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:23.233498   60176 cri.go:89] found id: ""
	I0725 18:53:23.233533   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.233545   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:23.233565   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:23.233580   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:23.287473   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:23.287506   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:23.300308   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:23.300358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:23.368879   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:23.368906   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:23.368919   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:23.445420   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:23.445453   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:25.985626   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:25.997898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:25.997971   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:26.030558   60176 cri.go:89] found id: ""
	I0725 18:53:26.030584   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.030593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:26.030599   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:26.030660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:26.067209   60176 cri.go:89] found id: ""
	I0725 18:53:26.067245   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.067256   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:26.067263   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:26.067348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:26.100872   60176 cri.go:89] found id: ""
	I0725 18:53:26.100891   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.100897   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:26.100902   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:26.100949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:26.135077   60176 cri.go:89] found id: ""
	I0725 18:53:26.135102   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.135110   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:26.135115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:26.135175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:26.171332   60176 cri.go:89] found id: ""
	I0725 18:53:26.171431   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.171445   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:26.171452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:26.171507   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:26.205883   60176 cri.go:89] found id: ""
	I0725 18:53:26.205912   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.205923   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:26.205930   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:26.205989   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:26.240407   60176 cri.go:89] found id: ""
	I0725 18:53:26.240436   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.240446   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:26.240452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:26.240513   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:26.273041   60176 cri.go:89] found id: ""
	I0725 18:53:26.273068   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.273078   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:26.273089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:26.273103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:26.327783   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:26.327815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:26.342925   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:26.342952   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:53:24.786526   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:26.786771   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:28.786890   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.100267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.600204   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.648059   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.648771   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:53:26.412563   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:26.412589   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:26.412605   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:26.493182   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:26.493222   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.030816   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:29.044047   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:29.044104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:29.077288   60176 cri.go:89] found id: ""
	I0725 18:53:29.077335   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.077354   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:29.077362   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:29.077429   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:29.113350   60176 cri.go:89] found id: ""
	I0725 18:53:29.113383   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.113395   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:29.113402   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:29.113472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:29.147123   60176 cri.go:89] found id: ""
	I0725 18:53:29.147151   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.147161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:29.147168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:29.147224   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:29.182248   60176 cri.go:89] found id: ""
	I0725 18:53:29.182279   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.182296   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:29.182304   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:29.182367   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:29.215750   60176 cri.go:89] found id: ""
	I0725 18:53:29.215777   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.215788   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:29.215795   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:29.215857   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:29.249001   60176 cri.go:89] found id: ""
	I0725 18:53:29.249027   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.249037   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:29.249044   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:29.249104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:29.281774   60176 cri.go:89] found id: ""
	I0725 18:53:29.281802   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.281812   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:29.281819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:29.281879   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:29.318703   60176 cri.go:89] found id: ""
	I0725 18:53:29.318728   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.318736   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:29.318744   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:29.318760   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:29.398145   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:29.398170   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:29.398184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:29.474090   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:29.474126   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.510143   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:29.510216   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:29.562952   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:29.562988   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:30.787145   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.788031   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.099672   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.148832   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.647209   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.076743   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:32.090035   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:32.090108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:32.123139   60176 cri.go:89] found id: ""
	I0725 18:53:32.123173   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.123184   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:32.123191   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:32.123255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:32.156337   60176 cri.go:89] found id: ""
	I0725 18:53:32.156363   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.156372   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:32.156378   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:32.156437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:32.191566   60176 cri.go:89] found id: ""
	I0725 18:53:32.191597   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.191609   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:32.191617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:32.191684   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:32.225480   60176 cri.go:89] found id: ""
	I0725 18:53:32.225519   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.225528   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:32.225535   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:32.225593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:32.257129   60176 cri.go:89] found id: ""
	I0725 18:53:32.257160   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.257169   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:32.257175   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:32.257221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:32.298142   60176 cri.go:89] found id: ""
	I0725 18:53:32.298171   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.298180   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:32.298190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:32.298240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:32.331052   60176 cri.go:89] found id: ""
	I0725 18:53:32.331081   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.331092   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:32.331098   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:32.331143   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:32.364841   60176 cri.go:89] found id: ""
	I0725 18:53:32.364871   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.364882   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:32.364892   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:32.364907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:32.417931   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:32.417970   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:32.432131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:32.432159   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:32.499759   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:32.499784   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:32.499806   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:32.579140   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:32.579191   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:35.120647   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:35.133992   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:35.134084   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:35.172030   60176 cri.go:89] found id: ""
	I0725 18:53:35.172052   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.172061   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:35.172067   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:35.172123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:35.207893   60176 cri.go:89] found id: ""
	I0725 18:53:35.207920   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.207930   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:35.207937   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:35.207991   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:35.241626   60176 cri.go:89] found id: ""
	I0725 18:53:35.241651   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.241661   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:35.241668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:35.241732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:35.274017   60176 cri.go:89] found id: ""
	I0725 18:53:35.274047   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.274058   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:35.274064   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:35.274129   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:35.308778   60176 cri.go:89] found id: ""
	I0725 18:53:35.308809   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.308820   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:35.308827   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:35.308890   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:35.341366   60176 cri.go:89] found id: ""
	I0725 18:53:35.341392   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.341400   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:35.341406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:35.341461   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:35.373955   60176 cri.go:89] found id: ""
	I0725 18:53:35.373983   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.373994   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:35.374001   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:35.374058   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:35.404705   60176 cri.go:89] found id: ""
	I0725 18:53:35.404733   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.404743   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:35.404755   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:35.404794   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:35.455009   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:35.455043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:35.469113   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:35.469141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:35.533466   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:35.533497   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:35.533514   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:35.608513   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:35.608546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:34.789202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:37.287021   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.100385   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:40.599540   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.647379   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.648503   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.147602   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.147415   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:38.159974   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:38.160032   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:38.191108   60176 cri.go:89] found id: ""
	I0725 18:53:38.191138   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.191150   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:38.191157   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:38.191207   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:38.223494   60176 cri.go:89] found id: ""
	I0725 18:53:38.223519   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.223527   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:38.223533   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:38.223583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:38.254433   60176 cri.go:89] found id: ""
	I0725 18:53:38.254462   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.254473   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:38.254480   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:38.254546   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:38.286229   60176 cri.go:89] found id: ""
	I0725 18:53:38.286258   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.286268   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:38.286276   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:38.286339   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:38.323332   60176 cri.go:89] found id: ""
	I0725 18:53:38.323362   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.323371   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:38.323378   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:38.323441   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:38.356260   60176 cri.go:89] found id: ""
	I0725 18:53:38.356290   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.356301   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:38.356309   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:38.356383   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:38.388543   60176 cri.go:89] found id: ""
	I0725 18:53:38.388571   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.388582   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:38.388588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:38.388660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:38.424003   60176 cri.go:89] found id: ""
	I0725 18:53:38.424030   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.424040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:38.424051   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:38.424065   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:38.474963   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:38.474995   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:38.488392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:38.488425   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:38.561922   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:38.561946   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:38.562116   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:38.646569   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:38.646604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:41.190319   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:41.202314   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:41.202382   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:41.238344   60176 cri.go:89] found id: ""
	I0725 18:53:41.238370   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.238378   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:41.238383   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:41.238438   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:41.272219   60176 cri.go:89] found id: ""
	I0725 18:53:41.272252   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.272263   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:41.272271   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:41.272349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:41.307125   60176 cri.go:89] found id: ""
	I0725 18:53:41.307151   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.307161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:41.307168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:41.307230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:41.339277   60176 cri.go:89] found id: ""
	I0725 18:53:41.339307   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.339320   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:41.339328   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:41.339394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:41.373989   60176 cri.go:89] found id: ""
	I0725 18:53:41.374103   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.374126   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:41.374136   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:41.374205   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:39.287244   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.287891   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.787538   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:42.600625   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.099276   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.647388   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.648749   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.404939   60176 cri.go:89] found id: ""
	I0725 18:53:41.404968   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.404979   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:41.404986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:41.405050   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:41.436889   60176 cri.go:89] found id: ""
	I0725 18:53:41.436919   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.436931   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:41.436940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:41.437009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:41.468457   60176 cri.go:89] found id: ""
	I0725 18:53:41.468486   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.468496   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:41.468506   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:41.468520   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:41.519499   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:41.519529   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:41.533653   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:41.533688   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:41.602134   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:41.602156   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:41.602171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:41.676181   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:41.676214   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.213932   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:44.226286   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:44.226352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:44.258782   60176 cri.go:89] found id: ""
	I0725 18:53:44.258817   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.258829   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:44.258835   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:44.258887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:44.308398   60176 cri.go:89] found id: ""
	I0725 18:53:44.308424   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.308432   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:44.308437   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:44.308499   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:44.339388   60176 cri.go:89] found id: ""
	I0725 18:53:44.339414   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.339424   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:44.339430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:44.339493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:44.369635   60176 cri.go:89] found id: ""
	I0725 18:53:44.369669   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.369679   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:44.369685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:44.369751   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:44.403834   60176 cri.go:89] found id: ""
	I0725 18:53:44.403859   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.403869   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:44.403876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:44.403939   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:44.439172   60176 cri.go:89] found id: ""
	I0725 18:53:44.439204   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.439215   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:44.439222   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:44.439287   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:44.474638   60176 cri.go:89] found id: ""
	I0725 18:53:44.474664   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.474674   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:44.474681   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:44.474743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:44.506205   60176 cri.go:89] found id: ""
	I0725 18:53:44.506226   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.506233   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:44.506241   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:44.506253   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:44.587955   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:44.587994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.626251   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:44.626276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:44.679008   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:44.679040   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:44.691749   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:44.691776   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:44.763419   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:46.286260   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.287172   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.099923   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:49.600555   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.148223   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:50.648549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.263738   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:47.275907   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:47.275974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:47.313612   60176 cri.go:89] found id: ""
	I0725 18:53:47.313642   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.313651   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:47.313662   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:47.313727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:47.345186   60176 cri.go:89] found id: ""
	I0725 18:53:47.345215   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.345226   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:47.345233   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:47.345304   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:47.378074   60176 cri.go:89] found id: ""
	I0725 18:53:47.378103   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.378114   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:47.378128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:47.378188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:47.407147   60176 cri.go:89] found id: ""
	I0725 18:53:47.407176   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.407186   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:47.407193   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:47.407255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:47.437015   60176 cri.go:89] found id: ""
	I0725 18:53:47.437049   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.437061   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:47.437068   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:47.437153   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:47.469201   60176 cri.go:89] found id: ""
	I0725 18:53:47.469231   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.469241   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:47.469248   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:47.469331   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:47.501160   60176 cri.go:89] found id: ""
	I0725 18:53:47.501189   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.501199   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:47.501206   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:47.501264   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:47.535102   60176 cri.go:89] found id: ""
	I0725 18:53:47.535140   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.535149   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:47.535159   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:47.535184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:47.547568   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:47.547593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:47.616025   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:47.616048   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:47.616062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:47.690450   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:47.690482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:47.725553   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:47.725589   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.281640   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:50.295201   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:50.295272   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:50.331689   60176 cri.go:89] found id: ""
	I0725 18:53:50.331713   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.331721   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:50.331726   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:50.331770   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:50.362392   60176 cri.go:89] found id: ""
	I0725 18:53:50.362422   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.362434   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:50.362441   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:50.362505   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:50.393410   60176 cri.go:89] found id: ""
	I0725 18:53:50.393433   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.393441   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:50.393449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:50.393493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:50.425041   60176 cri.go:89] found id: ""
	I0725 18:53:50.425068   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.425079   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:50.425085   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:50.425144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:50.461533   60176 cri.go:89] found id: ""
	I0725 18:53:50.461556   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.461563   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:50.461568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:50.461614   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:50.494395   60176 cri.go:89] found id: ""
	I0725 18:53:50.494417   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.494425   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:50.494431   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:50.494485   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:50.528639   60176 cri.go:89] found id: ""
	I0725 18:53:50.528663   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.528672   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:50.528678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:50.528724   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:50.562007   60176 cri.go:89] found id: ""
	I0725 18:53:50.562032   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.562040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:50.562049   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:50.562062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.612107   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:50.612141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:50.624516   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:50.624540   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:50.724772   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:50.724799   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:50.724818   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:50.813891   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:50.813924   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:50.288626   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.786395   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.100268   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:54.598939   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.147764   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:55.147940   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.352629   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:53.366863   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:53.366941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:53.401238   60176 cri.go:89] found id: ""
	I0725 18:53:53.401266   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.401277   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:53.401284   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:53.401351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:53.434133   60176 cri.go:89] found id: ""
	I0725 18:53:53.434166   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.434178   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:53.434186   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:53.434248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:53.470135   60176 cri.go:89] found id: ""
	I0725 18:53:53.470157   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.470165   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:53.470170   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:53.470217   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:53.512591   60176 cri.go:89] found id: ""
	I0725 18:53:53.512613   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.512621   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:53.512626   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:53.512683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:53.544476   60176 cri.go:89] found id: ""
	I0725 18:53:53.544506   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.544517   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:53.544524   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:53.544591   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:53.577697   60176 cri.go:89] found id: ""
	I0725 18:53:53.577727   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.577746   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:53.577753   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:53.577816   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:53.610729   60176 cri.go:89] found id: ""
	I0725 18:53:53.610754   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.610761   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:53.610769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:53.610817   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:53.645127   60176 cri.go:89] found id: ""
	I0725 18:53:53.645154   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.645164   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:53.645174   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:53.645188   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:53.694575   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:53.694608   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:53.707931   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:53.707958   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:53.778423   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:53.778446   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:53.778460   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:53.860424   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:53.860458   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:55.286806   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.288524   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:56.600953   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:59.099301   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.647861   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:00.148873   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:56.400993   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:56.418757   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:56.418834   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:56.466300   60176 cri.go:89] found id: ""
	I0725 18:53:56.466330   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.466340   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:56.466348   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:56.466409   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:56.523080   60176 cri.go:89] found id: ""
	I0725 18:53:56.523107   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.523117   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:56.523124   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:56.523184   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:56.554854   60176 cri.go:89] found id: ""
	I0725 18:53:56.554881   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.554891   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:56.554898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:56.554953   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:56.588851   60176 cri.go:89] found id: ""
	I0725 18:53:56.588876   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.588885   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:56.588892   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:56.588958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:56.623818   60176 cri.go:89] found id: ""
	I0725 18:53:56.623840   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.623849   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:56.623854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:56.623902   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:56.658958   60176 cri.go:89] found id: ""
	I0725 18:53:56.658982   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.658990   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:56.658996   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:56.659044   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:56.694689   60176 cri.go:89] found id: ""
	I0725 18:53:56.694715   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.694724   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:56.694729   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:56.694780   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:56.728038   60176 cri.go:89] found id: ""
	I0725 18:53:56.728067   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.728077   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:56.728088   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:56.728103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:56.805628   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:56.805657   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:56.805672   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:56.886168   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:56.886210   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:56.923004   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:56.923043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:56.975693   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:56.975729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.491244   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:59.503301   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:59.503363   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:59.540674   60176 cri.go:89] found id: ""
	I0725 18:53:59.540699   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.540707   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:59.540712   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:59.540763   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:59.575145   60176 cri.go:89] found id: ""
	I0725 18:53:59.575182   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.575192   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:59.575199   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:59.575260   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:59.606952   60176 cri.go:89] found id: ""
	I0725 18:53:59.606978   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.606989   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:59.606995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:59.607056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:59.645110   60176 cri.go:89] found id: ""
	I0725 18:53:59.645136   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.645147   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:59.645155   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:59.645218   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:59.676479   60176 cri.go:89] found id: ""
	I0725 18:53:59.676499   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.676507   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:59.676512   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:59.676581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:59.707454   60176 cri.go:89] found id: ""
	I0725 18:53:59.707482   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.707493   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:59.707500   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:59.707575   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:59.740387   60176 cri.go:89] found id: ""
	I0725 18:53:59.740414   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.740421   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:59.740427   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:59.740474   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:59.774171   60176 cri.go:89] found id: ""
	I0725 18:53:59.774199   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.774207   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:59.774216   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:59.774231   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:59.825138   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:59.825171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.839715   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:59.839742   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:59.905645   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:59.905681   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:59.905699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:59.980909   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:59.980943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:59.787202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.286987   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:01.099490   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:03.100056   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.602329   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.647803   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:04.648473   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.524178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:02.538055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:02.538113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:02.576234   60176 cri.go:89] found id: ""
	I0725 18:54:02.576259   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.576268   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:02.576274   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:02.576340   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:02.607765   60176 cri.go:89] found id: ""
	I0725 18:54:02.607792   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.607803   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:02.607810   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:02.607865   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:02.640566   60176 cri.go:89] found id: ""
	I0725 18:54:02.640592   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.640601   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:02.640606   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:02.640655   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:02.673476   60176 cri.go:89] found id: ""
	I0725 18:54:02.673504   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.673512   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:02.673517   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:02.673565   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:02.706270   60176 cri.go:89] found id: ""
	I0725 18:54:02.706299   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.706309   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:02.706316   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:02.706376   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:02.737108   60176 cri.go:89] found id: ""
	I0725 18:54:02.737138   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.737146   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:02.737152   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:02.737200   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:02.775681   60176 cri.go:89] found id: ""
	I0725 18:54:02.775710   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.775719   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:02.775724   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:02.775773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:02.808116   60176 cri.go:89] found id: ""
	I0725 18:54:02.808151   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.808159   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:02.808169   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:02.808182   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:02.872505   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:02.872534   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:02.872557   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:02.948158   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:02.948193   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:02.982990   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:02.983020   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:03.031910   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:03.031943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:05.545994   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:05.559105   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.559174   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.594106   60176 cri.go:89] found id: ""
	I0725 18:54:05.594134   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.594144   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:05.594151   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.594232   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.630148   60176 cri.go:89] found id: ""
	I0725 18:54:05.630172   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.630179   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:05.630185   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.630242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.662968   60176 cri.go:89] found id: ""
	I0725 18:54:05.662993   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.663003   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:05.663010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.663059   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.696645   60176 cri.go:89] found id: ""
	I0725 18:54:05.696668   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.696676   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:05.696682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.696738   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:05.730027   60176 cri.go:89] found id: ""
	I0725 18:54:05.730050   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.730058   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:05.730063   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:05.730113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:05.760918   60176 cri.go:89] found id: ""
	I0725 18:54:05.760946   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.760956   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:05.760968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:05.761027   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:05.801025   60176 cri.go:89] found id: ""
	I0725 18:54:05.801057   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.801068   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:05.801075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:05.801142   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:05.834567   60176 cri.go:89] found id: ""
	I0725 18:54:05.834594   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.834605   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:05.834615   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:05.834630   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:05.903812   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:05.903840   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:05.903855   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:05.981642   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:05.981671   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.024246   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.024316   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.081777   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:06.081802   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
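
(Editor's note) The two near-identical collection cycles above from process 60176 are minikube's log gatherer running against a node whose control plane never came up: every `crictl ps --name=<component>` query returns an empty list, `kubectl describe nodes` cannot reach localhost:8443, and only host-level sources remain. For reference, the host-level commands it falls back to, copied from the log, can be run by hand over SSH to get the same picture:

    sudo journalctl -u crio -n 400        # container runtime (CRI-O) logs
    sudo journalctl -u kubelet -n 400     # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings/errors
    sudo crictl ps -a                     # all containers, in any state
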
	I0725 18:54:04.786654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.786668   59645 pod_ready.go:81] duration metric: took 4m0.006258788s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:05.786698   59645 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:05.786708   59645 pod_ready.go:38] duration metric: took 4m6.551775292s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:54:05.786726   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:05.786754   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.786811   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.838362   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:05.838384   59645 cri.go:89] found id: ""
	I0725 18:54:05.838391   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:05.838441   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.843131   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.843190   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.882099   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:05.882125   59645 cri.go:89] found id: ""
	I0725 18:54:05.882134   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:05.882191   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.886383   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.886450   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.931971   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:05.932001   59645 cri.go:89] found id: ""
	I0725 18:54:05.932011   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:05.932069   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.936830   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.936891   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.976146   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:05.976171   59645 cri.go:89] found id: ""
	I0725 18:54:05.976179   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:05.976244   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.980878   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.980959   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:06.028640   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.028663   59645 cri.go:89] found id: ""
	I0725 18:54:06.028672   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:06.028720   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.033353   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:06.033411   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:06.072245   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.072269   59645 cri.go:89] found id: ""
	I0725 18:54:06.072279   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:06.072352   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.076614   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:06.076672   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:06.116418   59645 cri.go:89] found id: ""
	I0725 18:54:06.116443   59645 logs.go:276] 0 containers: []
	W0725 18:54:06.116453   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:06.116460   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:06.116520   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:06.154703   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:06.154725   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:06.154730   59645 cri.go:89] found id: ""
	I0725 18:54:06.154737   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:06.154795   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.158699   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.162190   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:06.162213   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.199003   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:06.199033   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.248171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:06.248208   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:06.774102   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:06.774139   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.815959   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.815984   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.872973   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:06.873013   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:06.915825   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:06.915858   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:06.958394   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:06.958423   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:06.993405   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:06.993437   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:07.026716   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:07.026745   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:07.040444   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:07.040474   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:07.156511   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:07.156541   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:07.191065   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:07.191091   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:08.099408   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:10.100363   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:07.148587   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:09.648368   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:08.598790   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:08.611234   60176 kubeadm.go:597] duration metric: took 4m4.357436643s to restartPrimaryControlPlane
	W0725 18:54:08.611305   60176 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0725 18:54:08.611343   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:54:13.076782   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.465409333s)
	I0725 18:54:13.076872   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:13.091089   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:54:13.102042   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:54:13.111117   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:54:13.111134   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:54:13.111171   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:54:13.119629   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:54:13.119676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:54:13.128676   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:54:13.136705   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:54:13.136761   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:54:13.145959   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.154628   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:54:13.154676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.163164   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:54:13.171473   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:54:13.171552   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
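
(Editor's note) Before re-running `kubeadm init`, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it; in the span above every grep exits with status 2 simply because the files were already wiped by the preceding `kubeadm reset`. A rough shell equivalent of that stale-config check, for illustration only (the real logic lives in minikube's kubeadm.go):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep a config only if it already points at the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
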
	I0725 18:54:13.179663   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:54:13.244923   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:54:13.245063   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:54:13.387687   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:54:13.387814   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:54:13.387941   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:54:13.566258   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:54:09.724251   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:09.740055   59645 api_server.go:72] duration metric: took 4m18.224261341s to wait for apiserver process to appear ...
	I0725 18:54:09.740086   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:09.740125   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:09.740189   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:09.780027   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:09.780052   59645 cri.go:89] found id: ""
	I0725 18:54:09.780061   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:09.780121   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.784110   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:09.784170   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:09.821158   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:09.821177   59645 cri.go:89] found id: ""
	I0725 18:54:09.821185   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:09.821245   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.825235   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:09.825294   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:09.863880   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:09.863903   59645 cri.go:89] found id: ""
	I0725 18:54:09.863910   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:09.863956   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.868206   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:09.868260   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:09.902168   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:09.902191   59645 cri.go:89] found id: ""
	I0725 18:54:09.902200   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:09.902260   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.906583   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:09.906637   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:09.948980   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:09.948997   59645 cri.go:89] found id: ""
	I0725 18:54:09.949004   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:09.949061   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.953072   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:09.953135   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:09.987862   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:09.987891   59645 cri.go:89] found id: ""
	I0725 18:54:09.987901   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:09.987970   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.991893   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:09.991956   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:10.029171   59645 cri.go:89] found id: ""
	I0725 18:54:10.029201   59645 logs.go:276] 0 containers: []
	W0725 18:54:10.029212   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:10.029229   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:10.029298   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:10.069098   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.069123   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.069129   59645 cri.go:89] found id: ""
	I0725 18:54:10.069138   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:10.069185   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.073777   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.077625   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:10.077650   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:10.089863   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:10.089889   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:10.139865   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:10.139906   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:10.178236   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:10.178263   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:10.216425   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:10.216455   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:10.249818   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:10.249845   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.286603   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:10.286629   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:10.325189   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:10.325215   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:10.378752   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:10.378793   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:10.485922   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:10.485964   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:10.535583   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:10.535627   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:10.586930   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:10.586963   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.626295   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:10.626323   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.552874   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:54:13.558265   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:54:13.559439   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:13.559459   59645 api_server.go:131] duration metric: took 3.819366874s to wait for apiserver health ...
	I0725 18:54:13.559467   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:13.559491   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:13.559539   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:13.597965   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:13.597988   59645 cri.go:89] found id: ""
	I0725 18:54:13.597996   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:13.598050   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.602225   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:13.602291   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:13.652885   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:13.652914   59645 cri.go:89] found id: ""
	I0725 18:54:13.652924   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:13.652982   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.656970   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:13.657031   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:13.690769   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:13.690792   59645 cri.go:89] found id: ""
	I0725 18:54:13.690802   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:13.690861   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.694630   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:13.694692   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:13.732306   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:13.732346   59645 cri.go:89] found id: ""
	I0725 18:54:13.732356   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:13.732413   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.736242   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:13.736311   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:13.771516   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:13.771543   59645 cri.go:89] found id: ""
	I0725 18:54:13.771552   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:13.771610   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.775592   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:13.775654   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:13.812821   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:13.812847   59645 cri.go:89] found id: ""
	I0725 18:54:13.812857   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:13.812911   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.817039   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:13.817097   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:13.856529   59645 cri.go:89] found id: ""
	I0725 18:54:13.856560   59645 logs.go:276] 0 containers: []
	W0725 18:54:13.856577   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:13.856584   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:13.856647   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:13.889734   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:13.889760   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:13.889766   59645 cri.go:89] found id: ""
	I0725 18:54:13.889774   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:13.889831   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.893730   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.897171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:13.897188   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.568262   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:54:13.568407   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:54:13.568493   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:54:13.568599   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:54:13.568677   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:54:13.568771   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:54:13.568844   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:54:13.569095   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:54:13.570081   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:54:13.570719   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:54:13.571213   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:54:13.571395   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:54:13.571482   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:54:13.900234   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:54:14.171283   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:54:14.317774   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:54:14.522412   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:54:14.537598   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:54:14.539553   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:54:14.539629   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:54:14.683755   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:54:12.600280   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.601203   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:11.648941   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.148132   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.685635   60176 out.go:204]   - Booting up control plane ...
	I0725 18:54:14.685764   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:54:14.697124   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:54:14.698087   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:54:14.698830   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:54:14.701051   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:54:14.314664   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:14.314702   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:14.359956   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:14.359991   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:14.429456   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:14.429491   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:14.551238   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:14.551279   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:14.598045   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:14.598082   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:14.633668   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:14.633700   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:14.668871   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:14.668897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:14.732575   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:14.732644   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:14.748852   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:14.748897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:14.794021   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:14.794058   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:14.836447   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:14.836481   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:14.870813   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:14.870852   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:17.414647   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:17.414678   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.414683   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.414687   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.414691   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.414694   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.414699   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.414704   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.414710   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.414718   59645 system_pods.go:74] duration metric: took 3.85524656s to wait for pod list to return data ...
	I0725 18:54:17.414726   59645 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:17.417047   59645 default_sa.go:45] found service account: "default"
	I0725 18:54:17.417067   59645 default_sa.go:55] duration metric: took 2.333088ms for default service account to be created ...
	I0725 18:54:17.417074   59645 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:17.422890   59645 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:17.422915   59645 system_pods.go:89] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.422920   59645 system_pods.go:89] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.422925   59645 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.422929   59645 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.422933   59645 system_pods.go:89] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.422936   59645 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.422942   59645 system_pods.go:89] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.422947   59645 system_pods.go:89] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.422953   59645 system_pods.go:126] duration metric: took 5.874194ms to wait for k8s-apps to be running ...
	I0725 18:54:17.422958   59645 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:17.422998   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:17.438463   59645 system_svc.go:56] duration metric: took 15.497014ms WaitForService to wait for kubelet
	I0725 18:54:17.438490   59645 kubeadm.go:582] duration metric: took 4m25.922705533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:17.438511   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:17.441632   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:17.441653   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:17.441671   59645 node_conditions.go:105] duration metric: took 3.155244ms to run NodePressure ...
	I0725 18:54:17.441682   59645 start.go:241] waiting for startup goroutines ...
	I0725 18:54:17.441688   59645 start.go:246] waiting for cluster config update ...
	I0725 18:54:17.441698   59645 start.go:255] writing updated cluster config ...
	I0725 18:54:17.441957   59645 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:17.491791   59645 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:17.493992   59645 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-600433" cluster and "default" namespace by default
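
(Editor's note) Process 59645 above finishes restarting the "default-k8s-diff-port-600433" profile: the apiserver answers /healthz on port 8444, all kube-system pods except the metrics-server are Running, the default service account exists, and the kubelet unit is active, so the profile is reported Done. The same final checks can be reproduced manually on the node; this is a minimal sketch built from commands already present in the log, where 192.168.50.221:8444 is this profile's apiserver endpoint:

    # control-plane containers known to the CRI runtime
    sudo crictl ps -a --quiet --name=kube-apiserver
    # apiserver health on the profile's non-default port (self-signed cert, hence -k)
    curl -k https://192.168.50.221:8444/healthz
    # kubelet service state
    sudo systemctl is-active --quiet kubelet && echo kubelet active
    # pod status in kube-system, using the kubeconfig minikube maintains on the node
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system
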
	I0725 18:54:16.601481   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:19.100120   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:16.646970   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:18.647757   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:20.650382   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:21.599857   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:24.099007   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:23.147215   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:25.148069   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:26.599428   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:28.600159   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:30.601469   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:27.150076   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:29.647741   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:33.100850   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:35.600080   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:31.648293   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:34.147584   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:36.147883   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.099662   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.601691   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.148559   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.648470   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:43.099948   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:45.599146   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:41.647969   60732 pod_ready.go:81] duration metric: took 4m0.006188545s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:41.647993   60732 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:41.647999   60732 pod_ready.go:38] duration metric: took 4m4.549463734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:54:41.648014   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:41.648042   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:41.648093   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:41.701960   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:41.701990   60732 cri.go:89] found id: ""
	I0725 18:54:41.702000   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:41.702060   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.706683   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:41.706775   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:41.741997   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:41.742019   60732 cri.go:89] found id: ""
	I0725 18:54:41.742027   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:41.742070   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.745965   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:41.746019   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:41.787104   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:41.787127   60732 cri.go:89] found id: ""
	I0725 18:54:41.787137   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:41.787189   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.791375   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:41.791441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:41.836394   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:41.836417   60732 cri.go:89] found id: ""
	I0725 18:54:41.836425   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:41.836472   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.840775   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:41.840830   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:41.877307   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:41.877328   60732 cri.go:89] found id: ""
	I0725 18:54:41.877338   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:41.877384   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.881221   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:41.881289   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:41.918540   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:41.918569   60732 cri.go:89] found id: ""
	I0725 18:54:41.918579   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:41.918639   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.922866   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:41.922975   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:41.957335   60732 cri.go:89] found id: ""
	I0725 18:54:41.957361   60732 logs.go:276] 0 containers: []
	W0725 18:54:41.957371   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:41.957377   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:41.957441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:41.998241   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:41.998269   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:41.998274   60732 cri.go:89] found id: ""
	I0725 18:54:41.998283   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:41.998333   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.002872   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.006541   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:42.006571   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:42.039456   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:42.039484   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:42.535367   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:42.535412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:42.592118   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:42.592165   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:42.606753   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:42.606784   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:42.656287   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:42.656337   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:42.696439   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:42.696470   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:42.752874   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:42.752913   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:42.786513   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:42.786540   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:42.914470   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:42.914506   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:42.951371   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:42.951399   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:42.989249   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:42.989278   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:43.030911   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:43.030945   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:45.581560   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:45.599532   60732 api_server.go:72] duration metric: took 4m15.71630146s to wait for apiserver process to appear ...
	I0725 18:54:45.599559   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:45.599602   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:45.599669   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:45.643222   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:45.643245   60732 cri.go:89] found id: ""
	I0725 18:54:45.643251   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:45.643293   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.647594   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:45.647646   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:45.685817   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:45.685843   60732 cri.go:89] found id: ""
	I0725 18:54:45.685851   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:45.685908   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.689698   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:45.689746   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:45.723068   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:45.723086   60732 cri.go:89] found id: ""
	I0725 18:54:45.723093   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:45.723139   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.727312   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:45.727373   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:45.764668   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.764691   60732 cri.go:89] found id: ""
	I0725 18:54:45.764698   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:45.764746   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.768763   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:45.768821   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:45.804140   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.804162   60732 cri.go:89] found id: ""
	I0725 18:54:45.804171   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:45.804229   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.807907   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:45.807962   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:45.845435   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:45.845458   60732 cri.go:89] found id: ""
	I0725 18:54:45.845465   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:45.845516   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.849429   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:45.849488   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:45.882663   60732 cri.go:89] found id: ""
	I0725 18:54:45.882696   60732 logs.go:276] 0 containers: []
	W0725 18:54:45.882706   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:45.882713   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:45.882779   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:45.916947   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:45.916975   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:45.916988   60732 cri.go:89] found id: ""
	I0725 18:54:45.916995   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:45.917039   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.921470   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.925153   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:45.925175   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.959693   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:45.959722   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.998162   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:45.998188   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:47.599790   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:49.605818   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:46.424235   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:46.424271   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:46.465439   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:46.465468   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:46.516900   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:46.516931   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:46.629700   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:46.629777   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:46.673233   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:46.673264   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:46.706641   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:46.706680   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:46.741970   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:46.742002   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:46.755337   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:46.755364   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:46.805564   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:46.805594   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:46.856226   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:46.856257   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.398852   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:54:49.403222   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:54:49.404180   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:49.404199   60732 api_server.go:131] duration metric: took 3.804634202s to wait for apiserver health ...
	I0725 18:54:49.404206   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:49.404227   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:49.404269   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:49.439543   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:49.439561   60732 cri.go:89] found id: ""
	I0725 18:54:49.439568   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:49.439625   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.444958   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:49.445028   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:49.482934   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:49.482959   60732 cri.go:89] found id: ""
	I0725 18:54:49.482969   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:49.483026   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.486982   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:49.487057   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:49.526379   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.526405   60732 cri.go:89] found id: ""
	I0725 18:54:49.526415   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:49.526481   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.531314   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:49.531401   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:49.565687   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.565716   60732 cri.go:89] found id: ""
	I0725 18:54:49.565724   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:49.565772   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.569706   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:49.569778   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:49.606900   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.606923   60732 cri.go:89] found id: ""
	I0725 18:54:49.606932   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:49.606986   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.611079   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:49.611155   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:49.645077   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.645099   60732 cri.go:89] found id: ""
	I0725 18:54:49.645107   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:49.645165   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.648932   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:49.648984   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:49.685181   60732 cri.go:89] found id: ""
	I0725 18:54:49.685209   60732 logs.go:276] 0 containers: []
	W0725 18:54:49.685220   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:49.685228   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:49.685290   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:49.718825   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.718852   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:49.718858   60732 cri.go:89] found id: ""
	I0725 18:54:49.718866   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:49.718927   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.723182   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.726590   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:49.726611   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.760011   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:49.760038   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.816552   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:49.816593   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.852003   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:49.852034   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.887907   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:49.887937   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.920728   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:49.920763   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:49.972145   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:49.972177   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:49.986365   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:49.986391   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:50.088100   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:50.088141   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:50.137382   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:50.137412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:50.181636   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:50.181668   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:50.217427   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:50.217452   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:50.575378   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:50.575421   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:53.125288   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:53.125322   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.125327   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.125331   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.125335   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.125338   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.125341   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.125347   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.125352   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.125358   60732 system_pods.go:74] duration metric: took 3.721147072s to wait for pod list to return data ...
	I0725 18:54:53.125365   60732 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:53.127677   60732 default_sa.go:45] found service account: "default"
	I0725 18:54:53.127695   60732 default_sa.go:55] duration metric: took 2.325927ms for default service account to be created ...
	I0725 18:54:53.127702   60732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:53.134656   60732 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:53.134682   60732 system_pods.go:89] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.134690   60732 system_pods.go:89] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.134697   60732 system_pods.go:89] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.134707   60732 system_pods.go:89] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.134713   60732 system_pods.go:89] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.134719   60732 system_pods.go:89] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.134729   60732 system_pods.go:89] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.134738   60732 system_pods.go:89] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.134745   60732 system_pods.go:126] duration metric: took 7.037359ms to wait for k8s-apps to be running ...
	I0725 18:54:53.134756   60732 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:53.134804   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:53.152898   60732 system_svc.go:56] duration metric: took 18.132464ms WaitForService to wait for kubelet
	I0725 18:54:53.152939   60732 kubeadm.go:582] duration metric: took 4m23.26971097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:53.152966   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:53.155626   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:53.155645   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:53.155654   60732 node_conditions.go:105] duration metric: took 2.684085ms to run NodePressure ...
	I0725 18:54:53.155664   60732 start.go:241] waiting for startup goroutines ...
	I0725 18:54:53.155670   60732 start.go:246] waiting for cluster config update ...
	I0725 18:54:53.155680   60732 start.go:255] writing updated cluster config ...
	I0725 18:54:53.155922   60732 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:53.202323   60732 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:53.204492   60732 out.go:177] * Done! kubectl is now configured to use "embed-certs-646344" cluster and "default" namespace by default
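	(Illustrative aside, not part of the captured log.) The "Done!" line above means minikube finished bringing up the embed-certs-646344 cluster and pointed kubectl at it. A minimal sketch of how one could confirm that from the same workstation, using only standard kubectl commands; the context name is taken from the log line above and the versions from the preceding "kubectl: 1.30.3, cluster: 1.30.3" line:

	    # Show which context kubectl is now using; should print embed-certs-646344
	    kubectl config current-context
	    # Confirm client and server versions match what the log reports (1.30.3 on both sides)
	    kubectl --context embed-certs-646344 version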
	I0725 18:54:52.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.599046   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.702358   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:54:54.702929   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:54.703166   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:54:56.600641   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:58.600997   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:59.703734   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:59.704045   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:01.099681   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:03.099863   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:05.099936   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:07.600199   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:09.600587   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:10.600594   59378 pod_ready.go:81] duration metric: took 4m0.007321371s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:55:10.600617   59378 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:55:10.600625   59378 pod_ready.go:38] duration metric: took 4m5.545225617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
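	(Illustrative aside, not part of the captured log.) The two lines above mark where the test gave up waiting, after roughly four minutes, for metrics-server-78fcd8795b-zthnk to report Ready. A common next step when a pod stays Pending like this is to describe it and read its events and conditions. The pod and namespace names below come from the log; the kubectl context name is an assumption, matching the no-preload profile this process configures later in the run:

	    # Inspect the stuck pod's events and container statuses
	    kubectl --context no-preload-371663 -n kube-system describe pod metrics-server-78fcd8795b-zthnk
	    # Print just the Ready condition the test was polling for
	    kubectl --context no-preload-371663 -n kube-system get pod metrics-server-78fcd8795b-zthnk \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}{"\n"}'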
	I0725 18:55:10.600637   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:55:10.600660   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:10.600701   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:10.652016   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:10.652040   59378 cri.go:89] found id: ""
	I0725 18:55:10.652047   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:10.652099   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.656405   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:10.656471   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:10.695672   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:10.695697   59378 cri.go:89] found id: ""
	I0725 18:55:10.695706   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:10.695768   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.700362   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:10.700424   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:10.736685   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.736702   59378 cri.go:89] found id: ""
	I0725 18:55:10.736709   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:10.736755   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.740626   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:10.740686   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:10.786452   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:10.786470   59378 cri.go:89] found id: ""
	I0725 18:55:10.786478   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:10.786533   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.790873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:10.790938   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:10.826203   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:10.826238   59378 cri.go:89] found id: ""
	I0725 18:55:10.826247   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:10.826311   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.830241   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:10.830418   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:10.865432   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:10.865460   59378 cri.go:89] found id: ""
	I0725 18:55:10.865470   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:10.865527   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.869415   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:10.869469   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:10.904230   59378 cri.go:89] found id: ""
	I0725 18:55:10.904254   59378 logs.go:276] 0 containers: []
	W0725 18:55:10.904262   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:10.904267   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:10.904339   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:10.938539   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:10.938558   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:10.938563   59378 cri.go:89] found id: ""
	I0725 18:55:10.938571   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:10.938623   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:09.704361   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:09.704593   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:10.942419   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.946266   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:10.946293   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.984335   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:10.984365   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:11.021733   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:11.021762   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:11.059218   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:11.059248   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:11.110886   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:11.110919   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:11.147381   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:11.147412   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:11.644012   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:11.644052   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:11.699290   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:11.699324   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:11.750317   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:11.750350   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:11.801340   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:11.801370   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:11.835746   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:11.835773   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:11.875309   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:11.875340   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:11.888262   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:11.888286   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:14.516169   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:55:14.533223   59378 api_server.go:72] duration metric: took 4m17.191676299s to wait for apiserver process to appear ...
	I0725 18:55:14.533248   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:55:14.533283   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:14.533328   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:14.568170   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:14.568188   59378 cri.go:89] found id: ""
	I0725 18:55:14.568195   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:14.568237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.572638   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:14.572704   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:14.605953   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:14.605976   59378 cri.go:89] found id: ""
	I0725 18:55:14.605983   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:14.606029   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.609849   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:14.609912   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:14.650049   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.650068   59378 cri.go:89] found id: ""
	I0725 18:55:14.650075   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:14.650117   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.653905   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:14.653966   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:14.697059   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:14.697078   59378 cri.go:89] found id: ""
	I0725 18:55:14.697086   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:14.697145   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.701179   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:14.701245   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:14.741482   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:14.741499   59378 cri.go:89] found id: ""
	I0725 18:55:14.741507   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:14.741554   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.745355   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:14.745410   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:14.784058   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.784077   59378 cri.go:89] found id: ""
	I0725 18:55:14.784086   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:14.784146   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.788254   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:14.788354   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:14.823286   59378 cri.go:89] found id: ""
	I0725 18:55:14.823309   59378 logs.go:276] 0 containers: []
	W0725 18:55:14.823317   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:14.823322   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:14.823369   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:14.860591   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.860625   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:14.860631   59378 cri.go:89] found id: ""
	I0725 18:55:14.860639   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:14.860693   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.864444   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.868015   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:14.868034   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.902336   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:14.902361   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.951281   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:14.951312   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.987810   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:14.987836   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:15.031264   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:15.031303   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:15.082950   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:15.082981   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:15.097240   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:15.097264   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:15.195392   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:15.195422   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:15.238978   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:15.239015   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:15.278551   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:15.278586   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:15.318486   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:15.318517   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:15.354217   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:15.354245   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:15.391511   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:15.391536   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:18.296420   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:55:18.301704   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:55:18.303040   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:55:18.303059   59378 api_server.go:131] duration metric: took 3.769804671s to wait for apiserver health ...
	I0725 18:55:18.303067   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:55:18.303097   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:18.303148   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:18.340192   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:18.340210   59378 cri.go:89] found id: ""
	I0725 18:55:18.340217   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:18.340262   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.343882   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:18.343936   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:18.381885   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:18.381912   59378 cri.go:89] found id: ""
	I0725 18:55:18.381922   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:18.381979   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.385682   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:18.385749   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:18.420162   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:18.420183   59378 cri.go:89] found id: ""
	I0725 18:55:18.420190   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:18.420237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.424103   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:18.424153   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:18.462946   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:18.462987   59378 cri.go:89] found id: ""
	I0725 18:55:18.462998   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:18.463055   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.467228   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:18.467278   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:18.510007   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:18.510036   59378 cri.go:89] found id: ""
	I0725 18:55:18.510046   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:18.510103   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.513873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:18.513937   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:18.551230   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:18.551255   59378 cri.go:89] found id: ""
	I0725 18:55:18.551264   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:18.551322   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.555764   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:18.555833   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:18.593584   59378 cri.go:89] found id: ""
	I0725 18:55:18.593615   59378 logs.go:276] 0 containers: []
	W0725 18:55:18.593626   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:18.593633   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:18.593690   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:18.631912   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.631938   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.631944   59378 cri.go:89] found id: ""
	I0725 18:55:18.631952   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:18.632036   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.635895   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.639457   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:18.639481   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.677563   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:18.677595   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.716298   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:18.716353   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:19.104236   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:19.104281   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:19.157931   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:19.157965   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:19.214479   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:19.214510   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:19.265860   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:19.265887   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:19.306476   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:19.306501   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:19.340758   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:19.340783   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:19.380798   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:19.380824   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:19.439585   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:19.439619   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:19.454117   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:19.454145   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:19.558944   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:19.558972   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:22.114733   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:55:22.114766   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.114773   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.114778   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.114783   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.114788   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.114792   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.114800   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.114806   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.114815   59378 system_pods.go:74] duration metric: took 3.811742621s to wait for pod list to return data ...
	I0725 18:55:22.114827   59378 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:55:22.118211   59378 default_sa.go:45] found service account: "default"
	I0725 18:55:22.118237   59378 default_sa.go:55] duration metric: took 3.400507ms for default service account to be created ...
	I0725 18:55:22.118245   59378 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:55:22.123350   59378 system_pods.go:86] 8 kube-system pods found
	I0725 18:55:22.123375   59378 system_pods.go:89] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.123380   59378 system_pods.go:89] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.123384   59378 system_pods.go:89] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.123390   59378 system_pods.go:89] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.123394   59378 system_pods.go:89] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.123398   59378 system_pods.go:89] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.123405   59378 system_pods.go:89] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.123410   59378 system_pods.go:89] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.123417   59378 system_pods.go:126] duration metric: took 5.166628ms to wait for k8s-apps to be running ...
	I0725 18:55:22.123424   59378 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:55:22.123467   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:55:22.139784   59378 system_svc.go:56] duration metric: took 16.349883ms WaitForService to wait for kubelet
	I0725 18:55:22.139808   59378 kubeadm.go:582] duration metric: took 4m24.798265923s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:55:22.139825   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:55:22.143958   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:55:22.143981   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:55:22.143992   59378 node_conditions.go:105] duration metric: took 4.161089ms to run NodePressure ...
	I0725 18:55:22.144006   59378 start.go:241] waiting for startup goroutines ...
	I0725 18:55:22.144015   59378 start.go:246] waiting for cluster config update ...
	I0725 18:55:22.144026   59378 start.go:255] writing updated cluster config ...
	I0725 18:55:22.144382   59378 ssh_runner.go:195] Run: rm -f paused
	I0725 18:55:22.192893   59378 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0725 18:55:22.195796   59378 out.go:177] * Done! kubectl is now configured to use "no-preload-371663" cluster and "default" namespace by default
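(The two lines above report a one-step minor-version skew between the local kubectl client, 1.30.3, and the freshly configured "no-preload-371663" cluster, 1.31.0-beta.0. A quick way to confirm which context kubectl is now pointing at and the exact client/server versions — a sketch using standard kubectl commands, not part of the test run:
	# show the context minikube just wrote, and the client/server versions behind the skew note
	kubectl config current-context
	kubectl version
)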
	I0725 18:55:29.705545   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:29.705871   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.707936   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:56:09.708279   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.708303   60176 kubeadm.go:310] 
	I0725 18:56:09.708361   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:56:09.708425   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:56:09.708434   60176 kubeadm.go:310] 
	I0725 18:56:09.708495   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:56:09.708548   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:56:09.708721   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:56:09.708755   60176 kubeadm.go:310] 
	I0725 18:56:09.708910   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:56:09.708960   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:56:09.708997   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:56:09.709006   60176 kubeadm.go:310] 
	I0725 18:56:09.709130   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:56:09.709230   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:56:09.709239   60176 kubeadm.go:310] 
	I0725 18:56:09.709366   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:56:09.709499   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:56:09.709608   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:56:09.709715   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:56:09.709730   60176 kubeadm.go:310] 
	I0725 18:56:09.710446   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:56:09.710594   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:56:09.710699   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0725 18:56:09.710838   60176 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 18:56:09.710897   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:56:15.078699   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.367772874s)
	I0725 18:56:15.078772   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:56:15.093265   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:56:15.102513   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:56:15.102529   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:56:15.102570   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:56:15.111001   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:56:15.111059   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:56:15.119773   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:56:15.128109   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:56:15.128166   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:56:15.136753   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.145122   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:56:15.145179   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.153952   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:56:15.162067   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:56:15.162109   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:56:15.170779   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:56:15.382925   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:58:11.387751   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:58:11.387868   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0725 18:58:11.389848   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:58:11.389935   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:58:11.390076   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:58:11.390177   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:58:11.390289   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:58:11.390389   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:58:11.392281   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:58:11.392400   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:58:11.392487   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:58:11.392609   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:58:11.392698   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:58:11.392808   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:58:11.392893   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:58:11.392960   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:58:11.393054   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:58:11.393160   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:58:11.393260   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:58:11.393311   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:58:11.393362   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:58:11.393415   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:58:11.393470   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:58:11.393522   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:58:11.393573   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:58:11.393665   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:58:11.393760   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:58:11.393815   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:58:11.393888   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:58:11.395197   60176 out.go:204]   - Booting up control plane ...
	I0725 18:58:11.395292   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:58:11.395385   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:58:11.395454   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:58:11.395528   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:58:11.395674   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:58:11.395717   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:58:11.395793   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396019   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396116   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396334   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396408   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396572   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396638   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396799   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396865   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.397061   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.397069   60176 kubeadm.go:310] 
	I0725 18:58:11.397102   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:58:11.397136   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:58:11.397141   60176 kubeadm.go:310] 
	I0725 18:58:11.397169   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:58:11.397212   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:58:11.397314   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:58:11.397338   60176 kubeadm.go:310] 
	I0725 18:58:11.397462   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:58:11.397504   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:58:11.397554   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:58:11.397566   60176 kubeadm.go:310] 
	I0725 18:58:11.397657   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:58:11.397730   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:58:11.397737   60176 kubeadm.go:310] 
	I0725 18:58:11.397832   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:58:11.397928   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:58:11.398009   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:58:11.398088   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:58:11.398144   60176 kubeadm.go:310] 
	I0725 18:58:11.398184   60176 kubeadm.go:394] duration metric: took 8m7.195831536s to StartCluster
	I0725 18:58:11.398237   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:58:11.398431   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:58:11.438474   60176 cri.go:89] found id: ""
	I0725 18:58:11.438497   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.438504   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:58:11.438509   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:58:11.438560   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:58:11.470965   60176 cri.go:89] found id: ""
	I0725 18:58:11.471000   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.471013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:58:11.471021   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:58:11.471086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:58:11.503353   60176 cri.go:89] found id: ""
	I0725 18:58:11.503387   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.503402   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:58:11.503409   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:58:11.503468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:58:11.535307   60176 cri.go:89] found id: ""
	I0725 18:58:11.535340   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.535350   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:58:11.535359   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:58:11.535425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:58:11.568071   60176 cri.go:89] found id: ""
	I0725 18:58:11.568094   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.568104   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:58:11.568118   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:58:11.568183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:58:11.600126   60176 cri.go:89] found id: ""
	I0725 18:58:11.600154   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.600165   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:58:11.600172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:58:11.600234   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:58:11.632609   60176 cri.go:89] found id: ""
	I0725 18:58:11.632635   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.632642   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:58:11.632648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:58:11.632706   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:58:11.666352   60176 cri.go:89] found id: ""
	I0725 18:58:11.666376   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.666384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:58:11.666392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:58:11.666409   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:58:11.766887   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:58:11.766912   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:58:11.766930   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:58:11.885565   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:58:11.885601   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:58:11.927611   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:58:11.927637   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:58:11.978011   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:58:11.978046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0725 18:58:11.991296   60176 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 18:58:11.991350   60176 out.go:239] * 
	W0725 18:58:11.991412   60176 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.991433   60176 out.go:239] * 
	W0725 18:58:11.992535   60176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:58:11.996223   60176 out.go:177] 
	W0725 18:58:11.997418   60176 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.997464   60176 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 18:58:11.997495   60176 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 18:58:11.998869   60176 out.go:177] 
	
	
	==> CRI-O <==
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.820081903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721933893820056387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7f08a37-b8b3-440c-9814-733e19538396 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.820879379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29b08654-3f7f-46e2-8703-d769b0a25210 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.820948039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29b08654-3f7f-46e2-8703-d769b0a25210 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.820994711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=29b08654-3f7f-46e2-8703-d769b0a25210 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.854192034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f367ac18-d8ad-4292-a096-37b643a06eda name=/runtime.v1.RuntimeService/Version
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.854289274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f367ac18-d8ad-4292-a096-37b643a06eda name=/runtime.v1.RuntimeService/Version
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.856403528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dda9f4b-f4a7-4cf9-9ca1-e12050b759bb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.856982050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721933893856943271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dda9f4b-f4a7-4cf9-9ca1-e12050b759bb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.857799441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99fc8bed-9ae2-41ef-b14d-4fe53cbc55c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.857869460Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99fc8bed-9ae2-41ef-b14d-4fe53cbc55c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.857914803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=99fc8bed-9ae2-41ef-b14d-4fe53cbc55c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.891531364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a100d63-fc57-4367-95bf-699f8c324c72 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.891640436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a100d63-fc57-4367-95bf-699f8c324c72 name=/runtime.v1.RuntimeService/Version
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.892903071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be64d671-a716-4609-82e2-0e0409465fee name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.893514014Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721933893893480640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be64d671-a716-4609-82e2-0e0409465fee name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.894332101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f4d0f3d-5736-4569-a543-d0c76cf95ecd name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.894414699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f4d0f3d-5736-4569-a543-d0c76cf95ecd name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.894461599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1f4d0f3d-5736-4569-a543-d0c76cf95ecd name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.928031270Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=667f0c4b-d649-43b7-9b24-62453c03840c name=/runtime.v1.RuntimeService/Version
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.928927091Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=667f0c4b-d649-43b7-9b24-62453c03840c name=/runtime.v1.RuntimeService/Version
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.931935257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4973b316-f8ae-4ce7-9eda-134bc299f771 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.932543663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721933893932507610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4973b316-f8ae-4ce7-9eda-134bc299f771 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.934884433Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d072809-bb9a-48b5-80d7-78e3f7837089 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.935005098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d072809-bb9a-48b5-80d7-78e3f7837089 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 18:58:13 old-k8s-version-108542 crio[648]: time="2024-07-25 18:58:13.935064301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6d072809-bb9a-48b5-80d7-78e3f7837089 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul25 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055343] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037717] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.863537] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.917310] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.440772] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.925882] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.062083] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062742] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.199961] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.129009] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.312354] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[Jul25 18:50] systemd-fstab-generator[838]: Ignoring "noauto" option for root device
	[  +0.061607] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.085718] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[ +12.193987] kauditd_printk_skb: 46 callbacks suppressed
	[Jul25 18:54] systemd-fstab-generator[5076]: Ignoring "noauto" option for root device
	[Jul25 18:56] systemd-fstab-generator[5371]: Ignoring "noauto" option for root device
	[  +0.066840] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:58:14 up 8 min,  0 users,  load average: 0.00, 0.08, 0.06
	Linux old-k8s-version-108542 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc00098ddd0)
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]: goroutine 168 [select]:
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b77ef0, 0x4f0ac20, 0xc0000519f0, 0x1, 0xc00009e0c0)
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0008dcc40, 0xc00009e0c0)
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009deba0, 0xc000b6a040)
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 25 18:58:11 old-k8s-version-108542 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 25 18:58:11 old-k8s-version-108542 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 25 18:58:11 old-k8s-version-108542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 25 18:58:11 old-k8s-version-108542 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 25 18:58:11 old-k8s-version-108542 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5600]: I0725 18:58:11.772410    5600 server.go:416] Version: v1.20.0
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5600]: I0725 18:58:11.772652    5600 server.go:837] Client rotation is on, will bootstrap in background
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5600]: I0725 18:58:11.774608    5600 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5600]: W0725 18:58:11.775469    5600 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 25 18:58:11 old-k8s-version-108542 kubelet[5600]: I0725 18:58:11.775896    5600 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108542 -n old-k8s-version-108542
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 2 (226.018221ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-108542" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (749.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-646344 -n embed-certs-646344
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-646344 -n embed-certs-646344: exit status 3 (3.167675832s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:47:42.080700   60597 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.133:22: connect: no route to host
	E0725 18:47:42.080719   60597 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.133:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-646344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-646344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151845376s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.133:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-646344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-646344 -n embed-certs-646344
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-646344 -n embed-certs-646344: exit status 3 (3.063819138s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0725 18:47:51.296696   60686 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.133:22: connect: no route to host
	E0725 18:47:51.296722   60686 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.133:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-646344" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-25 19:03:18.017634985 +0000 UTC m=+5672.098366841
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-600433 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-600433 logs -n 25: (2.059722089s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC |                     |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979261                              | cert-expiration-979261       | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:42 UTC |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-819413             | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-819413                  | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-108542        | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| image   | newest-cni-819413 image list                           | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-371663                  | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-371663 --memory=2200                     | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-600433       | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:54 UTC |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-646344            | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-108542             | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-646344                 | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC | 25 Jul 24 18:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:47:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:47:51.335413   60732 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:47:51.335822   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.335880   60732 out.go:304] Setting ErrFile to fd 2...
	I0725 18:47:51.335900   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.336419   60732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:47:51.337339   60732 out.go:298] Setting JSON to false
	I0725 18:47:51.338209   60732 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5415,"bootTime":1721927856,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:47:51.338264   60732 start.go:139] virtualization: kvm guest
	I0725 18:47:51.340134   60732 out.go:177] * [embed-certs-646344] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:47:51.341750   60732 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:47:51.341752   60732 notify.go:220] Checking for updates...
	I0725 18:47:51.344351   60732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:47:51.345770   60732 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:47:51.346912   60732 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:47:51.348038   60732 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:47:51.349161   60732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:47:51.350578   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:47:51.350953   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.350991   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.365561   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0725 18:47:51.365978   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.366490   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.366509   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.366823   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.366999   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.367234   60732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:47:51.367497   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.367527   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.381639   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0725 18:47:51.381960   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.382381   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.382402   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.382685   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.382870   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.415199   60732 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:47:51.416470   60732 start.go:297] selected driver: kvm2
	I0725 18:47:51.416488   60732 start.go:901] validating driver "kvm2" against &{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.416607   60732 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:47:51.417317   60732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.417405   60732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:47:51.431942   60732 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:47:51.432284   60732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:47:51.432371   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:47:51.432386   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:47:51.432434   60732 start.go:340] cluster config:
	{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.432535   60732 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.435012   60732 out.go:177] * Starting "embed-certs-646344" primary control-plane node in "embed-certs-646344" cluster
	I0725 18:47:53.472602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:47:51.436106   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:47:51.436136   60732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 18:47:51.436143   60732 cache.go:56] Caching tarball of preloaded images
	I0725 18:47:51.436215   60732 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:47:51.436238   60732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 18:47:51.436365   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:47:51.436560   60732 start.go:360] acquireMachinesLock for embed-certs-646344: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:47:59.552616   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:02.624594   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:08.704607   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:11.776581   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:17.856602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:20.928547   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:27.008590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:30.084604   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:36.160617   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:39.232633   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:45.312630   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:48.384662   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:54.464559   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:57.536621   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:03.616552   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:06.688590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.773620   59645 start.go:364] duration metric: took 4m26.592394108s to acquireMachinesLock for "default-k8s-diff-port-600433"
	I0725 18:49:15.773683   59645 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:15.773694   59645 fix.go:54] fixHost starting: 
	I0725 18:49:15.774019   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:15.774051   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:15.789240   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0725 18:49:15.789740   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:15.790212   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:15.790233   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:15.790591   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:15.790845   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:15.791014   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:15.793113   59645 fix.go:112] recreateIfNeeded on default-k8s-diff-port-600433: state=Stopped err=<nil>
	I0725 18:49:15.793149   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	W0725 18:49:15.793313   59645 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:15.795191   59645 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-600433" ...
	I0725 18:49:12.768538   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.771150   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:15.771186   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771533   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:49:15.771558   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771774   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:49:15.773458   59378 machine.go:97] duration metric: took 4m37.565633658s to provisionDockerMachine
	I0725 18:49:15.773505   59378 fix.go:56] duration metric: took 4m37.588536865s for fixHost
	I0725 18:49:15.773515   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 4m37.588577134s
	W0725 18:49:15.773539   59378 start.go:714] error starting host: provision: host is not running
	W0725 18:49:15.773622   59378 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0725 18:49:15.773634   59378 start.go:729] Will try again in 5 seconds ...
	I0725 18:49:15.796482   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Start
	I0725 18:49:15.796686   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring networks are active...
	I0725 18:49:15.797399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network default is active
	I0725 18:49:15.797752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network mk-default-k8s-diff-port-600433 is active
	I0725 18:49:15.798080   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Getting domain xml...
	I0725 18:49:15.798673   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Creating domain...
	I0725 18:49:17.018432   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting to get IP...
	I0725 18:49:17.019400   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.019970   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.020072   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.019959   61066 retry.go:31] will retry after 308.610139ms: waiting for machine to come up
	I0725 18:49:17.330698   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331224   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.331162   61066 retry.go:31] will retry after 334.762083ms: waiting for machine to come up
	I0725 18:49:17.667824   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668211   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668241   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.668158   61066 retry.go:31] will retry after 474.612313ms: waiting for machine to come up
	I0725 18:49:18.145035   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145575   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.145498   61066 retry.go:31] will retry after 493.878098ms: waiting for machine to come up
	I0725 18:49:18.641257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641839   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.641705   61066 retry.go:31] will retry after 747.653142ms: waiting for machine to come up
	I0725 18:49:20.776022   59378 start.go:360] acquireMachinesLock for no-preload-371663: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:49:19.390788   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391296   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:19.391237   61066 retry.go:31] will retry after 790.014184ms: waiting for machine to come up
	I0725 18:49:20.183244   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183733   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183756   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:20.183676   61066 retry.go:31] will retry after 932.227483ms: waiting for machine to come up
	I0725 18:49:21.117548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.117989   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.118019   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:21.117947   61066 retry.go:31] will retry after 1.421954156s: waiting for machine to come up
	I0725 18:49:22.541650   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542032   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542059   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:22.541972   61066 retry.go:31] will retry after 1.281624824s: waiting for machine to come up
	I0725 18:49:23.825380   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825721   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825738   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:23.825700   61066 retry.go:31] will retry after 1.470467032s: waiting for machine to come up
	I0725 18:49:25.298488   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.298993   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.299016   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:25.298958   61066 retry.go:31] will retry after 2.857621922s: waiting for machine to come up
	I0725 18:49:28.157929   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158361   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158387   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:28.158322   61066 retry.go:31] will retry after 2.354044303s: waiting for machine to come up
	I0725 18:49:30.514911   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515408   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515440   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:30.515361   61066 retry.go:31] will retry after 4.26590841s: waiting for machine to come up
	I0725 18:49:36.036943   60176 start.go:364] duration metric: took 3m49.551567331s to acquireMachinesLock for "old-k8s-version-108542"
	I0725 18:49:36.037007   60176 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:36.037018   60176 fix.go:54] fixHost starting: 
	I0725 18:49:36.037477   60176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:36.037517   60176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:36.055190   60176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0725 18:49:36.055631   60176 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:36.056086   60176 main.go:141] libmachine: Using API Version  1
	I0725 18:49:36.056105   60176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:36.056466   60176 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:36.056685   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:36.056862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetState
	I0725 18:49:36.058311   60176 fix.go:112] recreateIfNeeded on old-k8s-version-108542: state=Stopped err=<nil>
	I0725 18:49:36.058348   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	W0725 18:49:36.058530   60176 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:36.060822   60176 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-108542" ...
	I0725 18:49:36.062077   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .Start
	I0725 18:49:36.062241   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring networks are active...
	I0725 18:49:36.062926   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network default is active
	I0725 18:49:36.063329   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network mk-old-k8s-version-108542 is active
	I0725 18:49:36.063698   60176 main.go:141] libmachine: (old-k8s-version-108542) Getting domain xml...
	I0725 18:49:36.064367   60176 main.go:141] libmachine: (old-k8s-version-108542) Creating domain...
	I0725 18:49:34.786308   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786801   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Found IP for machine: 192.168.50.221
	I0725 18:49:34.786836   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has current primary IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserving static IP address...
	I0725 18:49:34.787187   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.787223   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | skip adding static IP to network mk-default-k8s-diff-port-600433 - found existing host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"}
	I0725 18:49:34.787237   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserved static IP address: 192.168.50.221
	I0725 18:49:34.787251   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Getting to WaitForSSH function...
	I0725 18:49:34.787261   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for SSH to be available...
	I0725 18:49:34.789202   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789467   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.789494   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789582   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH client type: external
	I0725 18:49:34.789608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa (-rw-------)
	I0725 18:49:34.789642   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:34.789656   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | About to run SSH command:
	I0725 18:49:34.789672   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | exit 0
	I0725 18:49:34.916303   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | SSH cmd err, output: <nil>: 
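For illustration, the external SSH probe above reduces to a single key-only `exit 0` against the guest; a minimal standalone equivalent using the key path and IP from the log (a sketch, not part of the harness) is:

    # Probe SSH reachability the way the provisioner does: key-only auth,
    # no host-key prompts, short timeout, then a no-op `exit 0`.
    ssh -F /dev/null \
        -o ConnectTimeout=10 -o ConnectionAttempts=3 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o PasswordAuthentication=no -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa \
        -p 22 docker@192.168.50.221 'exit 0' && echo "guest SSH is up"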
	I0725 18:49:34.916741   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetConfigRaw
	I0725 18:49:34.917309   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:34.919931   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920356   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.920388   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920711   59645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/config.json ...
	I0725 18:49:34.920952   59645 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:34.920973   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:34.921158   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:34.923280   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923663   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.923699   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923782   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:34.923953   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924116   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924367   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:34.924559   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:34.924778   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:34.924789   59645 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:35.036568   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:35.036605   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.036862   59645 buildroot.go:166] provisioning hostname "default-k8s-diff-port-600433"
	I0725 18:49:35.036890   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.037089   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.039523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.039891   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.039928   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.040048   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.040240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040409   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040540   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.040696   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.040855   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.040867   59645 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-600433 && echo "default-k8s-diff-port-600433" | sudo tee /etc/hostname
	I0725 18:49:35.170553   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-600433
	
	I0725 18:49:35.170606   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.173260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173590   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.173615   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173811   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.174057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.174606   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.174762   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.174798   59645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-600433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-600433/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-600433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:35.292349   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:35.292387   59645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:35.292425   59645 buildroot.go:174] setting up certificates
	I0725 18:49:35.292443   59645 provision.go:84] configureAuth start
	I0725 18:49:35.292456   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.292749   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:35.295317   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295628   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.295657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295817   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.297815   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298114   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.298146   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298330   59645 provision.go:143] copyHostCerts
	I0725 18:49:35.298373   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:35.298384   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:35.298461   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:35.298578   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:35.298590   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:35.298631   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:35.298725   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:35.298735   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:35.298767   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:35.298846   59645 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-600433 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-600433 localhost minikube]
	I0725 18:49:35.385077   59645 provision.go:177] copyRemoteCerts
	I0725 18:49:35.385142   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:35.385168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.387858   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388165   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.388195   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.388604   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.388760   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.388903   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.473920   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:35.496193   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0725 18:49:35.517673   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:35.538593   59645 provision.go:87] duration metric: took 246.139455ms to configureAuth
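configureAuth above regenerates server.pem with the SANs listed in the `san=[...]` line and copies it to /etc/docker on the guest. A quick local check of what was generated (a sketch against the workspace paths shown in the scp lines; not something the test runs) is:

    # Show the SANs baked into the new server cert and confirm it chains to the minikube CA.
    MK=/home/jenkins/minikube-integration/19326-5877/.minikube
    openssl x509 -in "$MK/machines/server.pem" -noout -text | grep -A1 'Subject Alternative Name'
    openssl verify -CAfile "$MK/certs/ca.pem" "$MK/machines/server.pem"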
	I0725 18:49:35.538617   59645 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:35.538796   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:35.538860   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.541598   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542144   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.542168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542369   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.542548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542664   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542812   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.542937   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.543138   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.543167   59645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:35.799471   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:35.799495   59645 machine.go:97] duration metric: took 878.530074ms to provisionDockerMachine
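The sysconfig step a few lines above amounts to writing one environment line for CRI-O and restarting the service. Re-run by hand inside the guest it would look like this (a sketch assuming the same option string):

    # Write the CRI-O drop-in the provisioner renders, then bounce and verify the service.
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
        | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
    systemctl is-active crio    # expect "active"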
	I0725 18:49:35.799509   59645 start.go:293] postStartSetup for "default-k8s-diff-port-600433" (driver="kvm2")
	I0725 18:49:35.799526   59645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:35.799569   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:35.799861   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:35.799916   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.802372   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.802776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802882   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.803057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.803200   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.803304   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.886188   59645 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:35.890053   59645 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:35.890090   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:35.890157   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:35.890227   59645 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:35.890317   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:35.899121   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:35.921904   59645 start.go:296] duration metric: took 122.381588ms for postStartSetup
	I0725 18:49:35.921942   59645 fix.go:56] duration metric: took 20.148249245s for fixHost
	I0725 18:49:35.921960   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.924865   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925265   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.925300   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925414   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.925608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925876   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.926011   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.926191   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.926205   59645 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 18:49:36.036748   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933376.013042854
	
	I0725 18:49:36.036779   59645 fix.go:216] guest clock: 1721933376.013042854
	I0725 18:49:36.036790   59645 fix.go:229] Guest: 2024-07-25 18:49:36.013042854 +0000 UTC Remote: 2024-07-25 18:49:35.921945116 +0000 UTC m=+286.890099623 (delta=91.097738ms)
	I0725 18:49:36.036855   59645 fix.go:200] guest clock delta is within tolerance: 91.097738ms
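The delta reported above is just the guest wall clock minus the host timestamp taken around the SSH call, here roughly 91 ms and well inside tolerance. A hand-rolled version of the same comparison (hypothetical, reusing the SSH options shown earlier) could be:

    # Sample both clocks as seconds.nanoseconds and print the difference in milliseconds.
    host=$(date +%s.%N)
    guest=$(ssh docker@192.168.50.221 'date +%s.%N')
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest clock delta: %.3f ms\n", (g - h) * 1000 }'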
	I0725 18:49:36.036863   59645 start.go:83] releasing machines lock for "default-k8s-diff-port-600433", held for 20.263198657s
	I0725 18:49:36.036905   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.037178   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:36.040216   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040692   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.040717   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040881   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041501   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041596   59645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:36.041657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.041693   59645 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:36.041718   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.044433   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.044775   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044799   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045030   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045191   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.045209   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045217   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045375   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045476   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045501   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.045648   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045828   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045988   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.158410   59645 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:36.164254   59645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:36.305911   59645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:36.312544   59645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:36.312642   59645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:36.327394   59645 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:36.327420   59645 start.go:495] detecting cgroup driver to use...
	I0725 18:49:36.327497   59645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:36.342695   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:36.355528   59645 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:36.355593   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:36.369191   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:36.382786   59645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:36.498465   59645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:36.635188   59645 docker.go:233] disabling docker service ...
	I0725 18:49:36.635272   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:36.655356   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:36.671402   59645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:36.819969   59645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:36.961130   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:36.976459   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:36.995542   59645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:49:36.995607   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.006967   59645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:37.007041   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.017503   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.027807   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.037804   59645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:37.047817   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.057895   59645 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.075586   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.085987   59645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:37.095527   59645 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:37.095593   59645 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:37.107540   59645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:37.117409   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:37.246455   59645 ssh_runner.go:195] Run: sudo systemctl restart crio
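Taken together, the commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs driver, conmon cgroup, unprivileged-port sysctl), load br_netfilter, enable IPv4 forwarding, and restart CRI-O. Condensed into a standalone sketch of the core edits:

    # Core CRI-O reconfiguration mirrored from the log above.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    # Kernel prerequisites for bridged pod networking.
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio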
	I0725 18:49:37.383873   59645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:37.383959   59645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:37.388630   59645 start.go:563] Will wait 60s for crictl version
	I0725 18:49:37.388687   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:49:37.393190   59645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:37.439603   59645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:37.439688   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.468723   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.501339   59645 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:49:37.502895   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:37.505728   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506098   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:37.506128   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506341   59645 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:37.510432   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
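The /etc/hosts one-liner above is dense; unpacked, it drops any stale host.minikube.internal entry, appends the gateway mapping, and copies the temp file back over /etc/hosts:

    # Equivalent steps, spelled out.
    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
    printf '192.168.50.1\thost.minikube.internal\n' >> /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$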
	I0725 18:49:37.523446   59645 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:37.523608   59645 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:49:37.523691   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:37.561149   59645 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:49:37.561209   59645 ssh_runner.go:195] Run: which lz4
	I0725 18:49:37.565614   59645 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 18:49:37.569702   59645 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:37.569738   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:49:38.884355   59645 crio.go:462] duration metric: took 1.318757754s to copy over tarball
	I0725 18:49:38.884481   59645 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:49:37.310225   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting to get IP...
	I0725 18:49:37.311059   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.311480   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.311557   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.311444   61209 retry.go:31] will retry after 249.654633ms: waiting for machine to come up
	I0725 18:49:37.563210   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.563727   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.563774   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.563696   61209 retry.go:31] will retry after 360.974896ms: waiting for machine to come up
	I0725 18:49:37.926464   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.927033   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.927104   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.926935   61209 retry.go:31] will retry after 392.213819ms: waiting for machine to come up
	I0725 18:49:38.320659   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.321153   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.321182   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.321107   61209 retry.go:31] will retry after 443.035852ms: waiting for machine to come up
	I0725 18:49:38.765569   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.765972   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.765996   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.765944   61209 retry.go:31] will retry after 691.876502ms: waiting for machine to come up
	I0725 18:49:39.459944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:39.460308   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:39.460354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:39.460259   61209 retry.go:31] will retry after 870.093433ms: waiting for machine to come up
	I0725 18:49:40.331944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:40.332382   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:40.332411   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:40.332301   61209 retry.go:31] will retry after 875.3931ms: waiting for machine to come up
	I0725 18:49:41.209789   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:41.210251   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:41.210275   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:41.210196   61209 retry.go:31] will retry after 1.355093494s: waiting for machine to come up
	I0725 18:49:41.126101   59645 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241583376s)
	I0725 18:49:41.126141   59645 crio.go:469] duration metric: took 2.24174402s to extract the tarball
	I0725 18:49:41.126152   59645 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:49:41.163655   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:41.204248   59645 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:49:41.204270   59645 cache_images.go:84] Images are preloaded, skipping loading
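Since the guest had no preloaded images, the cached tarball is copied over and unpacked into /var with extended attributes preserved, after which the crictl listing succeeds. The same sequence by hand (file names taken from the log; a sketch only):

    # Check for an already-staged preload, then extract the cached image tarball.
    stat -c "%s %y" /preloaded.tar.lz4 2>/dev/null || echo "no preload staged yet"
    # (the harness scp's the ~406 MB preloaded-images tarball to /preloaded.tar.lz4 here)
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images --output json | head -c 200    # images should now be listed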
	I0725 18:49:41.204278   59645 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0725 18:49:41.204442   59645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-600433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:49:41.204506   59645 ssh_runner.go:195] Run: crio config
	I0725 18:49:41.248210   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:41.248239   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:41.248255   59645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:49:41.248286   59645 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-600433 NodeName:default-k8s-diff-port-600433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:49:41.248491   59645 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-600433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
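One way to sanity-check a rendered config like the one above before the real init (not what the harness does here; the binary and staging paths are taken from the surrounding log lines) is a kubeadm dry run:

    # Hypothetical pre-flight: show what kubeadm would do without touching the node.
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run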
	
	I0725 18:49:41.248591   59645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:49:41.257987   59645 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:49:41.258057   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:49:41.267141   59645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0725 18:49:41.283078   59645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:49:41.299009   59645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0725 18:49:41.315642   59645 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0725 18:49:41.319267   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:41.330435   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:41.453042   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
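Once the unit file and 10-kubeadm.conf drop-in are in place and systemd has been reloaded, the effective kubelet command line can be confirmed on the guest (a manual check, not part of the test flow):

    # Show the installed unit plus drop-ins and confirm kubelet runs with the expected flags.
    systemctl cat kubelet
    systemctl is-active kubelet
    pgrep -a kubelet | grep -- --hostname-override=default-k8s-diff-port-600433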
	I0725 18:49:41.471864   59645 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433 for IP: 192.168.50.221
	I0725 18:49:41.471896   59645 certs.go:194] generating shared ca certs ...
	I0725 18:49:41.471915   59645 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:41.472098   59645 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:49:41.472151   59645 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:49:41.472163   59645 certs.go:256] generating profile certs ...
	I0725 18:49:41.472271   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.key
	I0725 18:49:41.472399   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key.28cfcfe9
	I0725 18:49:41.472470   59645 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key
	I0725 18:49:41.472630   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:49:41.472681   59645 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:49:41.472696   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:49:41.472734   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:49:41.472768   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:49:41.472801   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:49:41.472875   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:41.473783   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:49:41.519536   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:49:41.570915   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:49:41.596050   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:49:41.622290   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 18:49:41.644771   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:49:41.673056   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:49:41.698215   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:49:41.720502   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:49:41.742897   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:49:41.765403   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:49:41.788097   59645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:49:41.804016   59645 ssh_runner.go:195] Run: openssl version
	I0725 18:49:41.809451   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:49:41.819312   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823677   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823731   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.829342   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:49:41.839245   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:49:41.848902   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852894   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852948   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.858231   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:49:41.868414   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:49:41.878478   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882534   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882596   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.888100   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:49:41.897994   59645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:49:41.902066   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:49:41.907593   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:49:41.913339   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:49:41.918977   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:49:41.924846   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:49:41.931208   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
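The six openssl invocations above verify that each control-plane certificate stays valid for at least another day: `openssl x509 -checkend 86400` exits non-zero if the certificate expires within 86400 seconds. A minimal Go sketch of the same check follows; the certificate path is only an example taken from the log, not a claim about minikube internals.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Example path from the log above; any PEM certificate works here.
	cert := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	// -checkend 86400 makes openssl exit non-zero if the cert expires within 24h.
	cmd := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		fmt.Printf("%s expires within 24h (or could not be read): %v\n", cert, err)
		return
	}
	fmt.Printf("%s is valid for at least another 24h\n", cert)
}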
	I0725 18:49:41.936979   59645 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:49:41.937105   59645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:49:41.937165   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:41.973862   59645 cri.go:89] found id: ""
	I0725 18:49:41.973954   59645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:49:41.986980   59645 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:49:41.987006   59645 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:49:41.987059   59645 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:49:41.996155   59645 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:49:41.997176   59645 kubeconfig.go:125] found "default-k8s-diff-port-600433" server: "https://192.168.50.221:8444"
	I0725 18:49:41.999255   59645 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:49:42.007863   59645 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0725 18:49:42.007898   59645 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:49:42.007910   59645 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:49:42.007950   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:42.041234   59645 cri.go:89] found id: ""
	I0725 18:49:42.041344   59645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:49:42.057752   59645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:49:42.067347   59645 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:49:42.067367   59645 kubeadm.go:157] found existing configuration files:
	
	I0725 18:49:42.067414   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0725 18:49:42.075815   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:49:42.075862   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:49:42.084352   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0725 18:49:42.092738   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:49:42.092795   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:49:42.101917   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.110104   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:49:42.110171   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.118781   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0725 18:49:42.127369   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:49:42.127417   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:49:42.136433   59645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:49:42.145402   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.256466   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.967465   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.180271   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.238156   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.333954   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:49:43.334063   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:43.834381   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:42.566588   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:42.567061   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:42.567089   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:42.567010   61209 retry.go:31] will retry after 1.670701174s: waiting for machine to come up
	I0725 18:49:44.238961   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:44.239359   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:44.239377   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:44.239329   61209 retry.go:31] will retry after 2.028917586s: waiting for machine to come up
	I0725 18:49:46.270213   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:46.270674   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:46.270695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:46.270630   61209 retry.go:31] will retry after 2.760614678s: waiting for machine to come up
	I0725 18:49:44.335103   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:44.835115   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.334875   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.834915   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.849684   59645 api_server.go:72] duration metric: took 2.515729384s to wait for apiserver process to appear ...
	I0725 18:49:45.849717   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:49:45.849752   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.417830   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:49:48.417861   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:49:48.417898   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.496770   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.496823   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:48.850275   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.854417   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.854446   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.350652   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.356554   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:49.356585   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.849872   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.855690   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:49:49.863742   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:49:49.863770   59645 api_server.go:131] duration metric: took 4.014045168s to wait for apiserver health ...
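The health wait above polls https://192.168.50.221:8444/healthz roughly every 500ms, tolerating the initial 403 (anonymous user) and 500 (post-start hooks not yet finished) responses until the endpoint returns 200. A rough, self-contained Go sketch of such a poll loop is shown below; it is not the minikube implementation, and TLS verification is skipped only because the test cluster uses a self-signed minikubeCA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.50.221:8444/healthz" // endpoint from the log above
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver reports healthy")
				return
			}
			// 403/500 while bootstrap hooks finish; keep retrying.
			fmt.Printf("healthz returned %d, retrying\n", status)
		}
		time.Sleep(500 * time.Millisecond)
	}
}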
	I0725 18:49:49.863780   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:49.863788   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:49.865438   59645 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:49:49.034670   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:49.035109   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:49.035136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:49.035073   61209 retry.go:31] will retry after 2.928049351s: waiting for machine to come up
	I0725 18:49:49.866747   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:49:49.877963   59645 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:49:49.898915   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:49:49.916996   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:49:49.917037   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:49:49.917049   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:49:49.917067   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:49:49.917080   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:49:49.917093   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:49:49.917105   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:49:49.917112   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:49:49.917120   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:49:49.917127   59645 system_pods.go:74] duration metric: took 18.191827ms to wait for pod list to return data ...
	I0725 18:49:49.917145   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:49:49.921009   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:49:49.921032   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:49:49.921046   59645 node_conditions.go:105] duration metric: took 3.893327ms to run NodePressure ...
	I0725 18:49:49.921064   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:50.188485   59645 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192676   59645 kubeadm.go:739] kubelet initialised
	I0725 18:49:50.192696   59645 kubeadm.go:740] duration metric: took 4.188813ms waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192710   59645 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:50.197736   59645 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.203856   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203881   59645 pod_ready.go:81] duration metric: took 6.126055ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.203891   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203897   59645 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.209211   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209233   59645 pod_ready.go:81] duration metric: took 5.32855ms for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.209242   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209248   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.216079   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216104   59645 pod_ready.go:81] duration metric: took 6.848427ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.216115   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216122   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.301694   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301718   59645 pod_ready.go:81] duration metric: took 85.5884ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.301728   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301735   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.702363   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702392   59645 pod_ready.go:81] duration metric: took 400.649914ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.702400   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702406   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.102906   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102943   59645 pod_ready.go:81] duration metric: took 400.527709ms for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.102955   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102964   59645 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.502187   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502217   59645 pod_ready.go:81] duration metric: took 399.245254ms for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.502228   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502235   59645 pod_ready.go:38] duration metric: took 1.309515361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:51.502249   59645 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:49:51.513796   59645 ops.go:34] apiserver oom_adj: -16
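The oom_adj probe above confirms the restarted kube-apiserver is shielded from the kernel OOM killer (-16; more negative means less likely to be killed). A hypothetical Go equivalent of `cat /proc/$(pgrep kube-apiserver)/oom_adj`, written only to illustrate the check:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver is not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0] // first matching PID
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("reading oom_adj failed:", err)
		return
	}
	fmt.Printf("kube-apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}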
	I0725 18:49:51.513816   59645 kubeadm.go:597] duration metric: took 9.526804087s to restartPrimaryControlPlane
	I0725 18:49:51.513823   59645 kubeadm.go:394] duration metric: took 9.576855212s to StartCluster
	I0725 18:49:51.513842   59645 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.513969   59645 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:49:51.515531   59645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.515761   59645 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:49:51.515843   59645 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:49:51.515951   59645 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515975   59645 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515983   59645 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.515995   59645 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:49:51.516017   59645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-600433"
	I0725 18:49:51.516024   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516025   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:51.516022   59645 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.516103   59645 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.516123   59645 addons.go:243] addon metrics-server should already be in state true
	I0725 18:49:51.516202   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516314   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516361   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516365   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516386   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516636   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516713   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.517682   59645 out.go:177] * Verifying Kubernetes components...
	I0725 18:49:51.519072   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:51.530909   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35239
	I0725 18:49:51.531207   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0725 18:49:51.531391   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531704   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531952   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.531978   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532148   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.532169   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532291   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.532474   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.532501   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.533028   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.533069   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.534984   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0725 18:49:51.535323   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.535729   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.535749   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.536027   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.536055   59645 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.536077   59645 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:49:51.536103   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.536463   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536491   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.536518   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536562   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.548458   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I0725 18:49:51.548987   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.549539   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.549563   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.549880   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.550016   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0725 18:49:51.550105   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.550366   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.550862   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.550897   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0725 18:49:51.550975   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551220   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.551462   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.551708   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.551727   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.551768   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.552170   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.552745   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.552787   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.553221   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.554936   59645 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:49:51.556152   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:49:51.556166   59645 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:49:51.556184   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.556202   59645 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:49:51.557826   59645 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.557870   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:49:51.557892   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.558763   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559109   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.559126   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559255   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.559402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.559522   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.559637   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.560776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561142   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.561169   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561285   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.561462   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.561624   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.561769   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.572412   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I0725 18:49:51.572773   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.573256   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.573269   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.573596   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.573793   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.575260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.575503   59645 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.575513   59645 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:49:51.575523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.577887   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578208   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.578228   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578339   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.578496   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.578651   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.578775   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.710511   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:49:51.728187   59645 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:51.810767   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:49:51.810801   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:49:51.822774   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.828890   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.841308   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:49:51.841332   59645 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:49:51.864965   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:49:51.864991   59645 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:49:51.910359   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:49:52.699480   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699512   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699488   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699573   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699812   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699829   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699839   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699893   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.699926   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699940   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699956   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699968   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.700056   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700086   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700202   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700218   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700248   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.704859   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.704873   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.705126   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.705144   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.794977   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795000   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795318   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795339   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795341   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.795346   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795360   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795632   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795657   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795668   59645 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-600433"
	I0725 18:49:52.797643   59645 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:49:52.798886   59645 addons.go:510] duration metric: took 1.283046902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0725 18:49:53.731631   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.964707   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:51.965228   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:51.965263   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:51.965151   61209 retry.go:31] will retry after 3.053047755s: waiting for machine to come up
	I0725 18:49:55.022350   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022815   60176 main.go:141] libmachine: (old-k8s-version-108542) Found IP for machine: 192.168.39.29
	I0725 18:49:55.022846   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has current primary IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022858   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserving static IP address...
	I0725 18:49:55.023277   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserved static IP address: 192.168.39.29
	I0725 18:49:55.023333   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.023342   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting for SSH to be available...
	I0725 18:49:55.023394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | skip adding static IP to network mk-old-k8s-version-108542 - found existing host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"}
	I0725 18:49:55.023425   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Getting to WaitForSSH function...
	I0725 18:49:55.025250   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025544   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.025574   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025668   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH client type: external
	I0725 18:49:55.025699   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa (-rw-------)
	I0725 18:49:55.025731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:55.025753   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | About to run SSH command:
	I0725 18:49:55.025770   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | exit 0
	I0725 18:49:55.152309   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | SSH cmd err, output: <nil>: 
	I0725 18:49:55.152720   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:49:55.153338   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.155460   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.155755   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155969   60176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json ...
	I0725 18:49:55.156128   60176 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:55.156143   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:55.156307   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.158465   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.158795   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.158827   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.159012   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.159174   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159366   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159512   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.159688   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.159902   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.159914   60176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:55.268422   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:55.268446   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268707   60176 buildroot.go:166] provisioning hostname "old-k8s-version-108542"
	I0725 18:49:55.268732   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268931   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.271599   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.271913   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.271949   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.272120   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.272285   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272490   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272657   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.272830   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.273003   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.273017   60176 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-108542 && echo "old-k8s-version-108542" | sudo tee /etc/hostname
	I0725 18:49:55.398261   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-108542
	
	I0725 18:49:55.398291   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.401090   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.401517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401669   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.401870   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402026   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402182   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.402380   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.402621   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.402648   60176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-108542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-108542/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-108542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:55.523079   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:55.523115   60176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:55.523147   60176 buildroot.go:174] setting up certificates
	I0725 18:49:55.523156   60176 provision.go:84] configureAuth start
	I0725 18:49:55.523165   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.523486   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.526235   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526644   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.526675   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526875   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.529466   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.529836   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.529865   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.530004   60176 provision.go:143] copyHostCerts
	I0725 18:49:55.530058   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:55.530068   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:55.530113   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:55.530198   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:55.530205   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:55.530225   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:55.530386   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:55.530401   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:55.530426   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:55.530494   60176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-108542 san=[127.0.0.1 192.168.39.29 localhost minikube old-k8s-version-108542]
	I0725 18:49:55.740503   60176 provision.go:177] copyRemoteCerts
	I0725 18:49:55.740561   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:55.740585   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.743257   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743582   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.743615   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743798   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.743997   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.744160   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.744312   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:55.825771   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:55.847516   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0725 18:49:55.869368   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:55.893223   60176 provision.go:87] duration metric: took 370.054854ms to configureAuth
	I0725 18:49:55.893255   60176 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:55.893425   60176 config.go:182] Loaded profile config "old-k8s-version-108542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:49:55.893500   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.896394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896703   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.896758   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896962   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.897161   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897431   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897631   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.897855   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.898023   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.898036   60176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:56.181257   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:56.181300   60176 machine.go:97] duration metric: took 1.025159397s to provisionDockerMachine
	I0725 18:49:56.181315   60176 start.go:293] postStartSetup for "old-k8s-version-108542" (driver="kvm2")
	I0725 18:49:56.181334   60176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:56.181353   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.181666   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:56.181688   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.184354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.184718   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184851   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.185034   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.185185   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.185308   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.266683   60176 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:56.270387   60176 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:56.270407   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:56.270474   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:56.270559   60176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:56.270668   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:56.279276   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:56.302444   60176 start.go:296] duration metric: took 121.115308ms for postStartSetup
	I0725 18:49:56.302497   60176 fix.go:56] duration metric: took 20.26546429s for fixHost
	I0725 18:49:56.302517   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.305136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.305517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305706   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.305922   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306074   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306193   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.306317   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:56.306502   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:56.306514   60176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:49:56.412717   60732 start.go:364] duration metric: took 2m4.976127328s to acquireMachinesLock for "embed-certs-646344"
	I0725 18:49:56.412771   60732 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:56.412782   60732 fix.go:54] fixHost starting: 
	I0725 18:49:56.413158   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:56.413188   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:56.432299   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0725 18:49:56.432712   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:56.433231   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:49:56.433260   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:56.433647   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:56.433868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:49:56.434040   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:49:56.435582   60732 fix.go:112] recreateIfNeeded on embed-certs-646344: state=Stopped err=<nil>
	I0725 18:49:56.435617   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	W0725 18:49:56.435793   60732 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:56.437567   60732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-646344" ...
	I0725 18:49:56.412575   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933396.389223979
	
	I0725 18:49:56.412602   60176 fix.go:216] guest clock: 1721933396.389223979
	I0725 18:49:56.412612   60176 fix.go:229] Guest: 2024-07-25 18:49:56.389223979 +0000 UTC Remote: 2024-07-25 18:49:56.302501019 +0000 UTC m=+249.953644815 (delta=86.72296ms)
	I0725 18:49:56.412634   60176 fix.go:200] guest clock delta is within tolerance: 86.72296ms
	I0725 18:49:56.412639   60176 start.go:83] releasing machines lock for "old-k8s-version-108542", held for 20.375658703s
	I0725 18:49:56.412668   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.412935   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:56.415814   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416191   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.416219   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416398   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.416862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417065   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417160   60176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:56.417201   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.417309   60176 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:56.417329   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.420122   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420371   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420526   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420550   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420682   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.420816   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420846   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.420850   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420984   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.421058   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421126   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.421198   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.421272   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421418   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.529391   60176 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:56.535114   60176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:56.674979   60176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:56.681160   60176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:56.681260   60176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:56.696192   60176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:56.696215   60176 start.go:495] detecting cgroup driver to use...
	I0725 18:49:56.696309   60176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:56.713088   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:56.727033   60176 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:56.727095   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:56.742008   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:56.756146   60176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:56.884075   60176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:57.051613   60176 docker.go:233] disabling docker service ...
	I0725 18:49:57.051742   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:57.068011   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:57.082300   60176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:57.208673   60176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:57.372393   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:57.397281   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:57.418913   60176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0725 18:49:57.418978   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.429833   60176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:57.429909   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.440717   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.451076   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.465052   60176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:57.476592   60176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:57.487164   60176 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:57.487225   60176 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:57.501748   60176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:57.514743   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:57.658648   60176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:49:57.811455   60176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:57.811534   60176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:57.816193   60176 start.go:563] Will wait 60s for crictl version
	I0725 18:49:57.816267   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:49:57.819557   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:57.854511   60176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:57.854594   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.881542   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.910664   60176 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
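The sed commands logged just above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the v1.20-era pause image and the cgroupfs cgroup manager, with conmon placed in the pod cgroup. A hedged sketch of confirming those keys on the guest follows; the key names are the ones the sed expressions edit, and the grep itself is only illustrative.

	# Illustrative check of the CRI-O drop-in after the sed edits above.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.2"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"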
	I0725 18:49:55.733934   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:58.232025   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:56.438776   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Start
	I0725 18:49:56.438950   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring networks are active...
	I0725 18:49:56.439813   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network default is active
	I0725 18:49:56.440144   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network mk-embed-certs-646344 is active
	I0725 18:49:56.440644   60732 main.go:141] libmachine: (embed-certs-646344) Getting domain xml...
	I0725 18:49:56.441344   60732 main.go:141] libmachine: (embed-certs-646344) Creating domain...
	I0725 18:49:57.747307   60732 main.go:141] libmachine: (embed-certs-646344) Waiting to get IP...
	I0725 18:49:57.748364   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.748801   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.748852   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.748752   61389 retry.go:31] will retry after 207.883752ms: waiting for machine to come up
	I0725 18:49:57.958328   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.958813   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.958837   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.958773   61389 retry.go:31] will retry after 256.983672ms: waiting for machine to come up
	I0725 18:49:58.217316   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.217798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.217858   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.217760   61389 retry.go:31] will retry after 427.650618ms: waiting for machine to come up
	I0725 18:49:58.647668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.648053   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.648088   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.648021   61389 retry.go:31] will retry after 585.454725ms: waiting for machine to come up
	I0725 18:49:59.235003   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.235582   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.235612   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.235535   61389 retry.go:31] will retry after 477.660763ms: waiting for machine to come up
	I0725 18:49:59.715182   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.715675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.715706   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.715628   61389 retry.go:31] will retry after 775.403931ms: waiting for machine to come up
	I0725 18:50:00.492798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:00.493211   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:00.493239   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:00.493160   61389 retry.go:31] will retry after 1.086502086s: waiting for machine to come up
	I0725 18:49:57.912004   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:57.914958   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915429   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:57.915462   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915628   60176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:57.919685   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:57.932248   60176 kubeadm.go:883] updating cluster {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:57.932392   60176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:49:57.932440   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:57.982230   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:49:57.982305   60176 ssh_runner.go:195] Run: which lz4
	I0725 18:49:57.986657   60176 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:49:57.990932   60176 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:57.990956   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0725 18:49:59.415735   60176 crio.go:462] duration metric: took 1.429111358s to copy over tarball
	I0725 18:49:59.415800   60176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:49:59.234882   59645 node_ready.go:49] node "default-k8s-diff-port-600433" has status "Ready":"True"
	I0725 18:49:59.234909   59645 node_ready.go:38] duration metric: took 7.506682834s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:59.234921   59645 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:59.243034   59645 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.249940   59645 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace has status "Ready":"True"
	I0725 18:49:59.250024   59645 pod_ready.go:81] duration metric: took 6.957177ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.250051   59645 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.258057   59645 pod_ready.go:102] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:01.757802   59645 pod_ready.go:92] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.757828   59645 pod_ready.go:81] duration metric: took 2.50775832s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.757840   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762837   59645 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.762862   59645 pod_ready.go:81] duration metric: took 5.014715ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762874   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768001   59645 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.768027   59645 pod_ready.go:81] duration metric: took 5.144999ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768039   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772551   59645 pod_ready.go:92] pod "kube-proxy-smhmv" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.772574   59645 pod_ready.go:81] duration metric: took 4.526528ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772585   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.580990   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:01.581438   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:01.581464   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:01.581397   61389 retry.go:31] will retry after 1.452798696s: waiting for machine to come up
	I0725 18:50:03.036272   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:03.036730   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:03.036766   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:03.036682   61389 retry.go:31] will retry after 1.667137658s: waiting for machine to come up
	I0725 18:50:04.705567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:04.705992   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:04.706019   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:04.705958   61389 retry.go:31] will retry after 2.010863389s: waiting for machine to come up
	I0725 18:50:02.370917   60176 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.955090558s)
	I0725 18:50:02.370951   60176 crio.go:469] duration metric: took 2.955186203s to extract the tarball
	I0725 18:50:02.370960   60176 ssh_runner.go:146] rm: /preloaded.tar.lz4
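Since no preloaded images were found in the container runtime, the run falls back to copying the preload tarball onto the guest and unpacking it into /var. Condensed into a shell sketch, the steps logged above amount to the following; the scp line stands in for minikube's internal ssh_runner copy and is illustrative only, while the tar and rm steps are as logged.

	# Condensed form of the preload steps in the log (paths as logged).
	scp preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 docker@192.168.39.29:/preloaded.tar.lz4
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4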
	I0725 18:50:02.411686   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:02.448550   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:50:02.448575   60176 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:02.448653   60176 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0725 18:50:02.448657   60176 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.448722   60176 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.448739   60176 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.448661   60176 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450195   60176 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.450213   60176 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0725 18:50:02.450237   60176 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.450335   60176 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.450375   60176 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.450489   60176 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.711747   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.718711   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0725 18:50:02.721465   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.721473   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.728447   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.745432   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.745791   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.776147   60176 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0725 18:50:02.776200   60176 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.776245   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.857374   60176 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0725 18:50:02.857423   60176 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0725 18:50:02.857486   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.876850   60176 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0725 18:50:02.876897   60176 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.876922   60176 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0725 18:50:02.876963   60176 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.876974   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877024   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877044   60176 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0725 18:50:02.877071   60176 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.877107   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.896960   60176 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0725 18:50:02.897008   60176 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.897011   60176 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0725 18:50:02.897042   60176 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.897053   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897061   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.897083   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897120   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0725 18:50:02.897148   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.897196   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.897248   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.992459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.992499   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0725 18:50:03.005360   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0725 18:50:03.005381   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0725 18:50:03.005435   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0725 18:50:03.005459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:03.005503   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0725 18:50:03.042218   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0725 18:50:03.054960   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0725 18:50:03.279419   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:03.416646   60176 cache_images.go:92] duration metric: took 968.05409ms to LoadCachedImages
	W0725 18:50:03.416750   60176 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0725 18:50:03.416767   60176 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.20.0 crio true true} ...
	I0725 18:50:03.416896   60176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-108542 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:03.416979   60176 ssh_runner.go:195] Run: crio config
	I0725 18:50:03.470581   60176 cni.go:84] Creating CNI manager for ""
	I0725 18:50:03.470611   60176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:03.470627   60176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:03.470647   60176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-108542 NodeName:old-k8s-version-108542 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0725 18:50:03.470772   60176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-108542"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:03.470828   60176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0725 18:50:03.481757   60176 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:03.481839   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:03.494342   60176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0725 18:50:03.511779   60176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:03.532137   60176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0725 18:50:03.551049   60176 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:03.554903   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
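The one-liner above keeps /etc/hosts idempotent: it strips any stale control-plane.minikube.internal entry, appends the current mapping, and copies the result back with sudo. An equivalent, slightly more readable sketch (same IP and hostname as in the log; the temp-file handling is illustrative):

	# rebuild /etc/hosts without the old entry, then append the current one
	TMP=$(mktemp)
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > "$TMP"
	printf '192.168.39.29\tcontrol-plane.minikube.internal\n' >> "$TMP"
	sudo cp "$TMP" /etc/hosts && rm -f "$TMP"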
	I0725 18:50:03.566677   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:03.687540   60176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:03.710900   60176 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542 for IP: 192.168.39.29
	I0725 18:50:03.710922   60176 certs.go:194] generating shared ca certs ...
	I0725 18:50:03.710937   60176 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:03.711088   60176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:03.711126   60176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:03.711132   60176 certs.go:256] generating profile certs ...
	I0725 18:50:03.711231   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key
	I0725 18:50:03.711282   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0
	I0725 18:50:03.711315   60176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key
	I0725 18:50:03.711420   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:03.711449   60176 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:03.711458   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:03.711479   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:03.711499   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:03.711520   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:03.711562   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:03.712203   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:03.762265   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:03.804226   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:03.840167   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:03.868353   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 18:50:03.893425   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:03.917266   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:03.946205   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:03.974128   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:04.001887   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:04.026495   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:04.049083   60176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:04.065407   60176 ssh_runner.go:195] Run: openssl version
	I0725 18:50:04.071064   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:04.082038   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086705   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086760   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.092445   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:04.103129   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:04.113789   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118390   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118467   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.123884   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:04.134230   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:04.144372   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148559   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148620   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.153744   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
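The b5213941.0, 51391683.0 and 3ec20f2e.0 link names above follow OpenSSL's subject-hash convention, which is how TLS verifiers find CA certificates under /etc/ssl/certs. A minimal sketch of deriving such a link by hand (paths taken from the log, the manual step is illustrative):

	# compute the subject hash, then create the <hash>.0 symlink OpenSSL expects
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"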
	I0725 18:50:04.163757   60176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:04.167873   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:04.173706   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:04.179385   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:04.185222   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:04.190716   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:04.196938   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
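Each -checkend 86400 probe above exits non-zero if the certificate would expire within the next 24 hours, so the caller can decide whether to regenerate it. A hedged standalone example (certificate path copied from the log):

	# exit status drives the decision; the message is illustrative
	if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate expires within 24h" >&2
	fi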
	I0725 18:50:04.202361   60176 kubeadm.go:392] StartCluster: {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:04.202447   60176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:04.202505   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.243628   60176 cri.go:89] found id: ""
	I0725 18:50:04.243703   60176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:04.253768   60176 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:04.253788   60176 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:04.253841   60176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:04.264596   60176 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:04.265990   60176 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-108542" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:04.266997   60176 kubeconfig.go:62] /home/jenkins/minikube-integration/19326-5877/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-108542" cluster setting kubeconfig missing "old-k8s-version-108542" context setting]
	I0725 18:50:04.268480   60176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:04.388386   60176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:04.398469   60176 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.29
	I0725 18:50:04.398517   60176 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:04.398530   60176 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:04.398590   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.434823   60176 cri.go:89] found id: ""
	I0725 18:50:04.434906   60176 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:04.453378   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:04.463520   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:04.463559   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:04.463611   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:04.473075   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:04.473138   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:04.482881   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:04.494801   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:04.494875   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:04.507011   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.516433   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:04.516505   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.528076   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:04.537505   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:04.537572   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:04.547429   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:04.556717   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:04.754947   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.606839   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.850150   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.957944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
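The five kubeadm invocations above rebuild the control plane piecewise: certificates, kubeconfigs, the kubelet bootstrap, the static control-plane manifests, and local etcd. A condensed sketch of the same sequence (binary and config paths copied from the log; the loop form is illustrative):

	BIN=/var/lib/minikube/binaries/v1.20.0
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # word-splitting of $phase is intentional: each entry is a phase plus its sub-phase
	  sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
	done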
	I0725 18:50:06.039317   60176 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:06.039436   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:04.245768   59645 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:05.780345   59645 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:05.780380   59645 pod_ready.go:81] duration metric: took 4.007784646s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:05.780395   59645 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:07.787259   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:06.718406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:06.718961   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:06.718995   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:06.718902   61389 retry.go:31] will retry after 2.686345537s: waiting for machine to come up
	I0725 18:50:09.406854   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:09.407346   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:09.407388   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:09.407313   61389 retry.go:31] will retry after 3.432781605s: waiting for machine to come up
	I0725 18:50:06.539802   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.539809   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.539594   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.040315   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.539830   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.039578   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.539828   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:11.039598   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
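The half-second cadence of the pgrep runs above is the wait for the kube-apiserver process to appear after kubelet start. An equivalent standalone wait loop (the 60-second timeout is illustrative):

	# -x: exact match, -n: newest process, -f: match against the full command line
	for _ in $(seq 1 120); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && break
	  sleep 0.5
	done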
	I0725 18:50:10.285959   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:12.287101   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:14.181127   59378 start.go:364] duration metric: took 53.405056746s to acquireMachinesLock for "no-preload-371663"
	I0725 18:50:14.181178   59378 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:50:14.181187   59378 fix.go:54] fixHost starting: 
	I0725 18:50:14.181648   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:14.181689   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:14.198182   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I0725 18:50:14.198640   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:14.199151   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:14.199176   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:14.199619   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:14.199815   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:14.199945   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:14.201475   59378 fix.go:112] recreateIfNeeded on no-preload-371663: state=Stopped err=<nil>
	I0725 18:50:14.201496   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	W0725 18:50:14.201653   59378 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:50:14.203496   59378 out.go:177] * Restarting existing kvm2 VM for "no-preload-371663" ...
	I0725 18:50:12.841703   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842187   60732 main.go:141] libmachine: (embed-certs-646344) Found IP for machine: 192.168.61.133
	I0725 18:50:12.842222   60732 main.go:141] libmachine: (embed-certs-646344) Reserving static IP address...
	I0725 18:50:12.842234   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has current primary IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842625   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.842650   60732 main.go:141] libmachine: (embed-certs-646344) DBG | skip adding static IP to network mk-embed-certs-646344 - found existing host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"}
	I0725 18:50:12.842660   60732 main.go:141] libmachine: (embed-certs-646344) Reserved static IP address: 192.168.61.133
	I0725 18:50:12.842671   60732 main.go:141] libmachine: (embed-certs-646344) Waiting for SSH to be available...
	I0725 18:50:12.842684   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Getting to WaitForSSH function...
	I0725 18:50:12.844916   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845214   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.845237   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845372   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH client type: external
	I0725 18:50:12.845406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa (-rw-------)
	I0725 18:50:12.845474   60732 main.go:141] libmachine: (embed-certs-646344) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:12.845498   60732 main.go:141] libmachine: (embed-certs-646344) DBG | About to run SSH command:
	I0725 18:50:12.845528   60732 main.go:141] libmachine: (embed-certs-646344) DBG | exit 0
	I0725 18:50:12.968383   60732 main.go:141] libmachine: (embed-certs-646344) DBG | SSH cmd err, output: <nil>: 
	I0725 18:50:12.968690   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetConfigRaw
	I0725 18:50:12.969249   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:12.971567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972072   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.972102   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972338   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:50:12.972526   60732 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:12.972544   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:12.972739   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:12.974938   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975308   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.975336   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975462   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:12.975671   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.975831   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.976010   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:12.976184   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:12.976414   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:12.976428   60732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:13.076310   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:13.076369   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076609   60732 buildroot.go:166] provisioning hostname "embed-certs-646344"
	I0725 18:50:13.076637   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076830   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.079542   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.079895   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.079923   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.080050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.080232   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080385   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080530   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.080722   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.080917   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.080935   60732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-646344 && echo "embed-certs-646344" | sudo tee /etc/hostname
	I0725 18:50:13.193782   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-646344
	
	I0725 18:50:13.193814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.196822   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197149   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.197192   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197367   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.197581   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197772   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197906   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.198079   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.198292   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.198315   60732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-646344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-646344/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-646344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:13.313070   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:50:13.313098   60732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:13.313146   60732 buildroot.go:174] setting up certificates
	I0725 18:50:13.313161   60732 provision.go:84] configureAuth start
	I0725 18:50:13.313176   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.313457   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:13.316245   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316666   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.316695   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.319178   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319516   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.319540   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319697   60732 provision.go:143] copyHostCerts
	I0725 18:50:13.319751   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:13.319763   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:13.319816   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:13.319900   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:13.319908   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:13.319929   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:13.319981   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:13.319988   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:13.320004   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:13.320051   60732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.embed-certs-646344 san=[127.0.0.1 192.168.61.133 embed-certs-646344 localhost minikube]
	I0725 18:50:13.540822   60732 provision.go:177] copyRemoteCerts
	I0725 18:50:13.540881   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:13.540903   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.543520   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.543805   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.543855   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.544013   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.544227   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.544450   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.544649   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:13.629982   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:13.652453   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:13.674398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:50:13.698302   60732 provision.go:87] duration metric: took 385.127611ms to configureAuth
	I0725 18:50:13.698329   60732 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:13.698501   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:13.698574   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.701274   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.701702   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701850   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.702049   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702345   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.702510   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.702699   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.702720   60732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:13.954912   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:13.954942   60732 machine.go:97] duration metric: took 982.402505ms to provisionDockerMachine
	I0725 18:50:13.954953   60732 start.go:293] postStartSetup for "embed-certs-646344" (driver="kvm2")
	I0725 18:50:13.954963   60732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:13.954978   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:13.955269   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:13.955301   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.957946   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958309   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.958332   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958459   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.958663   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.958805   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.959017   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.039361   60732 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:14.043389   60732 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:14.043416   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:14.043488   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:14.043588   60732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:14.043686   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:14.053277   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:14.075725   60732 start.go:296] duration metric: took 120.758673ms for postStartSetup
	I0725 18:50:14.075772   60732 fix.go:56] duration metric: took 17.662990552s for fixHost
	I0725 18:50:14.075795   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.078338   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078728   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.078782   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078932   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.079187   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079393   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.079763   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:14.080049   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:14.080068   60732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 18:50:14.180948   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933414.131955665
	
	I0725 18:50:14.180974   60732 fix.go:216] guest clock: 1721933414.131955665
	I0725 18:50:14.180983   60732 fix.go:229] Guest: 2024-07-25 18:50:14.131955665 +0000 UTC Remote: 2024-07-25 18:50:14.075776451 +0000 UTC m=+142.772748611 (delta=56.179214ms)
	I0725 18:50:14.181032   60732 fix.go:200] guest clock delta is within tolerance: 56.179214ms
	I0725 18:50:14.181038   60732 start.go:83] releasing machines lock for "embed-certs-646344", held for 17.768291807s
	I0725 18:50:14.181069   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.181338   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:14.183693   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184035   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.184065   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184195   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184748   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184936   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.185004   60732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:14.185050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.185172   60732 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:14.185203   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.187720   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188004   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188071   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188095   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188367   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188393   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188397   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188555   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.188567   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188738   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188757   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.188868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.189001   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.270424   60732 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:14.322480   60732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:14.468034   60732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:14.474022   60732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:14.474090   60732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:14.494765   60732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:50:14.494793   60732 start.go:495] detecting cgroup driver to use...
	I0725 18:50:14.494862   60732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:14.515047   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:14.531708   60732 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:14.531773   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:14.546508   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:14.560878   60732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:14.681034   60732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:14.830960   60732 docker.go:233] disabling docker service ...
	I0725 18:50:14.831032   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:14.853115   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:14.869852   60732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:14.995284   60732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:15.109759   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:15.123118   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:15.140723   60732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:50:15.140792   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.150912   60732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:15.150968   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.161603   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.173509   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.183857   60732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:15.195023   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.207216   60732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.223821   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.234472   60732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:15.243979   60732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:15.244032   60732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:15.256791   60732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:50:15.268608   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:15.396398   60732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:50:15.528593   60732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:15.528659   60732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:15.534218   60732 start.go:563] Will wait 60s for crictl version
	I0725 18:50:15.534288   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:50:15.537933   60732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:15.583719   60732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:15.583824   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.613123   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.643327   60732 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
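Before declaring the runtime ready, the log above shows minikube pointing crictl at the CRI-O socket via /etc/crictl.yaml, rewriting /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl), restarting crio, and then waiting up to 60s for /var/run/crio/crio.sock before running crictl version. A minimal Go sketch of that final wait-then-query step follows; this is not minikube's code, only an illustration using the socket path and crictl invocation shown in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock" // path taken from the log above
	deadline := time.Now().Add(60 * time.Second)
	// Poll for the socket to appear after "systemctl restart crio".
	for {
		if _, err := os.Stat(sock); err == nil {
			break
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
			os.Exit(1)
		}
		time.Sleep(500 * time.Millisecond)
	}
	// Ask crictl for the runtime name/version, as in the "crictl version" step.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}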
	I0725 18:50:14.204765   59378 main.go:141] libmachine: (no-preload-371663) Calling .Start
	I0725 18:50:14.204935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring networks are active...
	I0725 18:50:14.205596   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network default is active
	I0725 18:50:14.205935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network mk-no-preload-371663 is active
	I0725 18:50:14.206473   59378 main.go:141] libmachine: (no-preload-371663) Getting domain xml...
	I0725 18:50:14.207048   59378 main.go:141] libmachine: (no-preload-371663) Creating domain...
	I0725 18:50:15.487909   59378 main.go:141] libmachine: (no-preload-371663) Waiting to get IP...
	I0725 18:50:15.488775   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.489188   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.489244   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.489164   61562 retry.go:31] will retry after 288.758246ms: waiting for machine to come up
	I0725 18:50:15.779810   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.780284   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.780346   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.780234   61562 retry.go:31] will retry after 255.724346ms: waiting for machine to come up
	I0725 18:50:15.644608   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:15.647958   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648356   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:15.648386   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648602   60732 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:15.652342   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:15.664409   60732 kubeadm.go:883] updating cluster {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:15.664587   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:50:15.664658   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:15.701646   60732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:50:15.701703   60732 ssh_runner.go:195] Run: which lz4
	I0725 18:50:15.705629   60732 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:50:15.709366   60732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:50:15.709398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:50:11.540367   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.040178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.039929   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.540517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.040281   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.540287   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.039549   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.540265   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:16.039520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.828431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:17.287944   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:16.037762   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.038357   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.038391   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.038313   61562 retry.go:31] will retry after 486.960289ms: waiting for machine to come up
	I0725 18:50:16.527269   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.527868   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.527899   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.527826   61562 retry.go:31] will retry after 389.104399ms: waiting for machine to come up
	I0725 18:50:16.918319   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.918911   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.918945   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.918854   61562 retry.go:31] will retry after 690.549271ms: waiting for machine to come up
	I0725 18:50:17.610632   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:17.611242   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:17.611269   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:17.611199   61562 retry.go:31] will retry after 753.624655ms: waiting for machine to come up
	I0725 18:50:18.366551   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:18.367078   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:18.367119   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:18.367022   61562 retry.go:31] will retry after 1.115992813s: waiting for machine to come up
	I0725 18:50:19.484121   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:19.484611   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:19.484641   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:19.484556   61562 retry.go:31] will retry after 1.306583093s: waiting for machine to come up
	I0725 18:50:20.793118   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:20.793603   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:20.793630   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:20.793548   61562 retry.go:31] will retry after 1.175948199s: waiting for machine to come up
	I0725 18:50:17.015043   60732 crio.go:462] duration metric: took 1.309449954s to copy over tarball
	I0725 18:50:17.015143   60732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:50:19.256777   60732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241585619s)
	I0725 18:50:19.256816   60732 crio.go:469] duration metric: took 2.241743782s to extract the tarball
	I0725 18:50:19.256825   60732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:50:19.293259   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:19.346692   60732 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:50:19.346714   60732 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:50:19.346722   60732 kubeadm.go:934] updating node { 192.168.61.133 8443 v1.30.3 crio true true} ...
	I0725 18:50:19.346822   60732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-646344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:19.346884   60732 ssh_runner.go:195] Run: crio config
	I0725 18:50:19.391246   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:19.391272   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:19.391287   60732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:19.391320   60732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.133 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-646344 NodeName:embed-certs-646344 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.133"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:19.391518   60732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-646344"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.133
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.133"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:19.391597   60732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:50:19.401672   60732 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:19.401743   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:19.410693   60732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0725 18:50:19.428155   60732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:19.443819   60732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0725 18:50:19.461139   60732 ssh_runner.go:195] Run: grep 192.168.61.133	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:19.465121   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.133	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:19.478939   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:19.593175   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:19.609679   60732 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344 for IP: 192.168.61.133
	I0725 18:50:19.609705   60732 certs.go:194] generating shared ca certs ...
	I0725 18:50:19.609726   60732 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:19.609918   60732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:19.609976   60732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:19.609989   60732 certs.go:256] generating profile certs ...
	I0725 18:50:19.610096   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/client.key
	I0725 18:50:19.610176   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key.b1982a11
	I0725 18:50:19.610227   60732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key
	I0725 18:50:19.610380   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:19.610424   60732 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:19.610436   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:19.610467   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:19.610490   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:19.610518   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:19.610575   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:19.611227   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:19.647448   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:19.679186   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:19.703996   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:19.731396   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0725 18:50:19.759550   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:50:19.795812   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:19.818419   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:19.840831   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:19.862271   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:19.886159   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:19.910827   60732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:19.926056   60732 ssh_runner.go:195] Run: openssl version
	I0725 18:50:19.931721   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:19.942217   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946261   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946324   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.951695   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:19.961642   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:19.971592   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975615   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975671   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.980904   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:19.991023   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:20.001258   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005322   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005398   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.010666   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:20.021300   60732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:20.025462   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:20.031181   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:20.037216   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:20.043670   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:20.051210   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:20.057316   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
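The series of "openssl x509 ... -checkend 86400" commands above verifies that each control-plane certificate stays valid for at least another 24 hours before the cluster restart proceeds. The following is a minimal Go sketch of the same check on a single PEM file using crypto/x509; it is not minikube's certs.go, just the equivalent expiry test.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path given on the command line, e.g. apiserver-kubelet-client.crt.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of "-checkend 86400": fail if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}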
	I0725 18:50:20.062598   60732 kubeadm.go:392] StartCluster: {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:20.062719   60732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:20.062793   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.098154   60732 cri.go:89] found id: ""
	I0725 18:50:20.098229   60732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:20.107991   60732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:20.108017   60732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:20.108066   60732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:20.117394   60732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:20.118456   60732 kubeconfig.go:125] found "embed-certs-646344" server: "https://192.168.61.133:8443"
	I0725 18:50:20.120660   60732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:20.129546   60732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.133
	I0725 18:50:20.129576   60732 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:20.129589   60732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:20.129645   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.162792   60732 cri.go:89] found id: ""
	I0725 18:50:20.162883   60732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:20.178972   60732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:20.187981   60732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:20.188005   60732 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:20.188060   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:20.197371   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:20.197429   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:20.206704   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:20.215394   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:20.215459   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:20.224116   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.232437   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:20.232495   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.241577   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:20.249916   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:20.249976   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:20.258838   60732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:20.267902   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:20.380000   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:16.539725   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.539756   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.040221   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.539666   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.040416   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.540379   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.040257   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.540153   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:21.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.787705   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:22.230346   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:21.971072   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:21.971517   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:21.971544   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:21.971471   61562 retry.go:31] will retry after 1.926890777s: waiting for machine to come up
	I0725 18:50:23.900824   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:23.901448   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:23.901479   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:23.901397   61562 retry.go:31] will retry after 1.777870483s: waiting for machine to come up
	I0725 18:50:25.681617   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:25.682161   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:25.682190   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:25.682122   61562 retry.go:31] will retry after 2.846649743s: waiting for machine to come up
	I0725 18:50:21.816404   60732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.436368273s)
	I0725 18:50:21.816441   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.014796   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.093533   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.201595   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:22.201692   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.702680   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.202769   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.701909   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.720378   60732 api_server.go:72] duration metric: took 1.518780528s to wait for apiserver process to appear ...
	I0725 18:50:23.720468   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:23.720503   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:21.540165   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.539544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.040164   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.539691   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.040229   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.540225   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.039517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.540158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.542598   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:26.542661   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:26.542677   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.653001   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.653044   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:26.721231   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.725819   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.725851   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.221435   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.226412   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.226452   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.720962   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.726521   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.726550   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:28.221186   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:28.225358   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:50:28.232310   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:50:28.232348   60732 api_server.go:131] duration metric: took 4.511861085s to wait for apiserver health ...
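The 500 responses above come from the apiserver's verbose /healthz endpoint, which prints one [+]/[-] line per check and only returns 200 once every poststarthook (here rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) reports ok. The same endpoint can be probed by hand; this is an illustrative command, not part of the test run, -k skips TLS verification where minikube's checker presents the cluster CA, and it assumes anonymous access to /healthz is enabled (the default):
	curl -sk "https://192.168.61.133:8443/healthz?verbose"
	# one [+]/[-] line per check; HTTP 500 until every poststarthook is ok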
	I0725 18:50:28.232359   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:28.232368   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:28.234169   60732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:50:24.287433   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:26.287625   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.287755   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.235545   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:28.246029   60732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
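The bridge CNI config just copied can be read back off the node if needed; an illustrative check, assuming the kubeconfig/ssh profile name matches the cluster name (the exact 496-byte contents are not reproduced in this log):
	out/minikube-linux-amd64 -p embed-certs-646344 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	# expected to define a "bridge" plugin with host-local IPAM for the pod network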
	I0725 18:50:28.265973   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:28.277752   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:28.277791   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:28.277801   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:28.277818   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:28.277830   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:28.277839   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:28.277851   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:28.277861   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:28.277868   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:28.277878   60732 system_pods.go:74] duration metric: took 11.88598ms to wait for pod list to return data ...
	I0725 18:50:28.277895   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:28.282289   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:28.282320   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:28.282335   60732 node_conditions.go:105] duration metric: took 4.431712ms to run NodePressure ...
	I0725 18:50:28.282354   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:28.551353   60732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557049   60732 kubeadm.go:739] kubelet initialised
	I0725 18:50:28.557074   60732 kubeadm.go:740] duration metric: took 5.692584ms waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557083   60732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:28.564396   60732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.568721   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568745   60732 pod_ready.go:81] duration metric: took 4.325942ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.568755   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568762   60732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.572373   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572397   60732 pod_ready.go:81] duration metric: took 3.627867ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.572404   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572411   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.576876   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576897   60732 pod_ready.go:81] duration metric: took 4.478779ms for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.576903   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576909   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.669762   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669788   60732 pod_ready.go:81] duration metric: took 92.870934ms for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.669797   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669803   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.069536   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069564   60732 pod_ready.go:81] duration metric: took 399.753713ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.069573   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069580   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.471102   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471130   60732 pod_ready.go:81] duration metric: took 401.542911ms for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.471139   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471145   60732 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.869464   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869499   60732 pod_ready.go:81] duration metric: took 398.344638ms for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.869511   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869520   60732 pod_ready.go:38] duration metric: took 1.312426343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
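The pod_ready.go loop above is roughly equivalent to waiting on each system-critical pod with kubectl, except that minikube additionally skips pods whose node is not yet "Ready", as seen in the WaitExtra lines. An approximate out-of-band equivalent (illustrative only; the kubeconfig context name is assumed to match the profile name):
	kubectl --context embed-certs-646344 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s
	kubectl --context embed-certs-646344 -n kube-system wait pod \
	  -l component=kube-apiserver --for=condition=Ready --timeout=4m0s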
	I0725 18:50:29.869549   60732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:29.881205   60732 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:29.881230   60732 kubeadm.go:597] duration metric: took 9.773206057s to restartPrimaryControlPlane
	I0725 18:50:29.881241   60732 kubeadm.go:394] duration metric: took 9.818649836s to StartCluster
	I0725 18:50:29.881264   60732 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.881348   60732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:29.882924   60732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.883197   60732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:29.883269   60732 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:29.883366   60732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-646344"
	I0725 18:50:29.883380   60732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-646344"
	I0725 18:50:29.883401   60732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-646344"
	W0725 18:50:29.883411   60732 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:29.883425   60732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-646344"
	I0725 18:50:29.883419   60732 addons.go:69] Setting metrics-server=true in profile "embed-certs-646344"
	I0725 18:50:29.883444   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883461   60732 addons.go:234] Setting addon metrics-server=true in "embed-certs-646344"
	W0725 18:50:29.883481   60732 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:29.883443   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:29.883512   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883840   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883870   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883929   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883969   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883935   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.884014   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.885204   60732 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:29.886676   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:29.899359   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0725 18:50:29.899418   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0725 18:50:29.899865   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900280   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900493   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900513   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900744   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900769   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900850   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901092   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901288   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.901473   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.901504   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.903520   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0725 18:50:29.903975   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.904512   60732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-646344"
	W0725 18:50:29.904529   60732 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:29.904542   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.904551   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.904558   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.904830   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.904854   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.904861   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.905388   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.905425   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.917614   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0725 18:50:29.918105   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.918628   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.918660   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.918960   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.919128   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.920885   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.922852   60732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:29.923872   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0725 18:50:29.923895   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37193
	I0725 18:50:29.924134   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:29.924148   60732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:29.924167   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.924376   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924451   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924817   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924837   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.924970   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924985   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.925223   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.925473   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.925493   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.926319   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.926366   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.926970   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.927368   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.927829   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927971   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.928192   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.928355   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.928445   60732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:28.529935   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:28.530428   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:28.530449   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:28.530381   61562 retry.go:31] will retry after 2.913225709s: waiting for machine to come up
	I0725 18:50:29.928527   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.929735   60732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:29.929755   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:29.929770   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.932668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933040   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.933066   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933304   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.933499   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.933674   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.933806   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.947401   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42223
	I0725 18:50:29.947801   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.948222   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.948249   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.948567   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.948819   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.950344   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.950550   60732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:29.950566   60732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:29.950584   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.953193   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953589   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.953618   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953892   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.954062   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.954224   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.954348   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:30.074297   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:30.095138   60732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-646344" to be "Ready" ...
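The node readiness wait started here can be reproduced out-of-band with kubectl; illustrative only, again assuming the context name equals the profile name:
	kubectl --context embed-certs-646344 wait node/embed-certs-646344 \
	  --for=condition=Ready --timeout=6m0s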
	I0725 18:50:30.149031   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:30.154470   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:30.247852   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:30.247872   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:30.264189   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:30.264220   60732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:30.282583   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:30.282606   60732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:30.298927   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:31.226498   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.071992912s)
	I0725 18:50:31.226572   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226587   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.226730   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077663797s)
	I0725 18:50:31.226771   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226782   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227150   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227166   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227166   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227171   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227175   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227183   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227186   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227192   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227198   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227217   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227468   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227483   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227495   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227502   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227548   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227556   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.234538   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.234562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.234822   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.234839   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237597   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237615   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.237853   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.237871   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237871   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.237879   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237888   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.238123   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.238133   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.238144   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.238155   60732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-646344"
	I0725 18:50:31.239876   60732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:50:31.241165   60732 addons.go:510] duration metric: took 1.357900639s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
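Once the addons are reported enabled, their state can be cross-checked from the host; these are standard minikube/kubectl invocations for reference, not commands the test itself runs:
	out/minikube-linux-amd64 -p embed-certs-646344 addons list
	kubectl --context embed-certs-646344 -n kube-system get deploy metrics-server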
	I0725 18:50:26.540560   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.039938   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.539928   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.039509   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.540137   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.040535   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.539745   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.039557   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.540254   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:31.040189   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.787880   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:33.288654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:31.446688   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has current primary IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447343   59378 main.go:141] libmachine: (no-preload-371663) Found IP for machine: 192.168.72.62
	I0725 18:50:31.447351   59378 main.go:141] libmachine: (no-preload-371663) Reserving static IP address...
	I0725 18:50:31.447800   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.447831   59378 main.go:141] libmachine: (no-preload-371663) DBG | skip adding static IP to network mk-no-preload-371663 - found existing host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"}
	I0725 18:50:31.447848   59378 main.go:141] libmachine: (no-preload-371663) Reserved static IP address: 192.168.72.62
	I0725 18:50:31.447862   59378 main.go:141] libmachine: (no-preload-371663) Waiting for SSH to be available...
	I0725 18:50:31.447875   59378 main.go:141] libmachine: (no-preload-371663) DBG | Getting to WaitForSSH function...
	I0725 18:50:31.449978   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450325   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.450344   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450468   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH client type: external
	I0725 18:50:31.450499   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa (-rw-------)
	I0725 18:50:31.450530   59378 main.go:141] libmachine: (no-preload-371663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:31.450547   59378 main.go:141] libmachine: (no-preload-371663) DBG | About to run SSH command:
	I0725 18:50:31.450553   59378 main.go:141] libmachine: (no-preload-371663) DBG | exit 0
	I0725 18:50:31.576105   59378 main.go:141] libmachine: (no-preload-371663) DBG | SSH cmd err, output: <nil>: 
	I0725 18:50:31.576631   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetConfigRaw
	I0725 18:50:31.577245   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.579460   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.579968   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.580003   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.580381   59378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/config.json ...
	I0725 18:50:31.580703   59378 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:31.580728   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:31.580956   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.583261   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583564   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.583592   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583717   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.583910   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584085   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584246   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.584476   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.584689   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.584701   59378 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:31.696230   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:31.696261   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696509   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:50:31.696536   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696714   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.699042   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699322   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.699359   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699484   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.699701   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699968   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.700164   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.700480   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.700503   59378 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-371663 && echo "no-preload-371663" | sudo tee /etc/hostname
	I0725 18:50:31.826044   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-371663
	
	I0725 18:50:31.826069   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.828951   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829261   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.829313   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829483   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.829695   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.829878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.830065   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.830274   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.830449   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.830466   59378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-371663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-371663/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-371663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:31.948518   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:50:31.948561   59378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:31.948739   59378 buildroot.go:174] setting up certificates
	I0725 18:50:31.948753   59378 provision.go:84] configureAuth start
	I0725 18:50:31.948771   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.949045   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.951790   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952169   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.952194   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952363   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.954317   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954610   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.954633   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954770   59378 provision.go:143] copyHostCerts
	I0725 18:50:31.954835   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:31.954848   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:31.954901   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:31.954987   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:31.954997   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:31.955021   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:31.955074   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:31.955081   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:31.955097   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:31.955149   59378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.no-preload-371663 san=[127.0.0.1 192.168.72.62 localhost minikube no-preload-371663]
	I0725 18:50:32.038369   59378 provision.go:177] copyRemoteCerts
	I0725 18:50:32.038427   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:32.038448   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.041392   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041787   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.041823   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041961   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.042148   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.042322   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.042454   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.130425   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:32.153447   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:32.179831   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:50:32.202512   59378 provision.go:87] duration metric: took 253.73326ms to configureAuth
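configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.72.62, localhost, minikube, no-preload-371663) and copies it to /etc/docker on the node. The SANs on the installed cert can be inspected with an illustrative command, assuming openssl is available in the guest image:
	out/minikube-linux-amd64 -p no-preload-371663 ssh -- \
	  "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"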
	I0725 18:50:32.202539   59378 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:32.202722   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:32.202787   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.205038   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205415   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.205445   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205666   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.205853   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206022   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206162   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.206347   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.206543   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.206569   59378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:32.483108   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:32.483135   59378 machine.go:97] duration metric: took 902.412636ms to provisionDockerMachine
	I0725 18:50:32.483147   59378 start.go:293] postStartSetup for "no-preload-371663" (driver="kvm2")
	I0725 18:50:32.483162   59378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:32.483182   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.483495   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:32.483525   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.486096   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486457   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.486484   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486662   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.486856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.487002   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.487133   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.575210   59378 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:32.579169   59378 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:32.579196   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:32.579278   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:32.579383   59378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:32.579558   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:32.588619   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:32.611429   59378 start.go:296] duration metric: took 128.267646ms for postStartSetup
	I0725 18:50:32.611471   59378 fix.go:56] duration metric: took 18.430282963s for fixHost
	I0725 18:50:32.611493   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.614328   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614667   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.614696   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.615100   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615260   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615408   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.615587   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.615848   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.615863   59378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:50:32.724784   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933432.694745980
	
	I0725 18:50:32.724810   59378 fix.go:216] guest clock: 1721933432.694745980
	I0725 18:50:32.724822   59378 fix.go:229] Guest: 2024-07-25 18:50:32.69474598 +0000 UTC Remote: 2024-07-25 18:50:32.611474903 +0000 UTC m=+371.708292453 (delta=83.271077ms)
	I0725 18:50:32.724850   59378 fix.go:200] guest clock delta is within tolerance: 83.271077ms
	I0725 18:50:32.724864   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 18.543706361s
	I0725 18:50:32.724891   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.725152   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:32.727958   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728294   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.728340   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728478   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.728957   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729091   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729192   59378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:32.729243   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.729319   59378 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:32.729347   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.731757   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732040   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732063   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732081   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732196   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732384   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.732538   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732557   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732562   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.732734   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732734   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.732890   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.733041   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.733164   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.845665   59378 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:32.851484   59378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:32.994671   59378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:33.000655   59378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:33.000718   59378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:33.016541   59378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:50:33.016570   59378 start.go:495] detecting cgroup driver to use...
	I0725 18:50:33.016634   59378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:33.032473   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:33.046063   59378 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:33.046126   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:33.059249   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:33.072607   59378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:33.204647   59378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:33.353644   59378 docker.go:233] disabling docker service ...
	I0725 18:50:33.353719   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:33.368162   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:33.380709   59378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:33.521954   59378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:33.656011   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:33.668858   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:33.685751   59378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0725 18:50:33.685826   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.695022   59378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:33.695106   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.704447   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.713600   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.722782   59378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:33.733635   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.744226   59378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.761049   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.771689   59378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:33.781648   59378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:33.781695   59378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:33.794549   59378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:50:33.803765   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:33.915398   59378 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:50:34.054477   59378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:34.054535   59378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:34.058998   59378 start.go:563] Will wait 60s for crictl version
	I0725 18:50:34.059058   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.062552   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:34.105552   59378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:34.105616   59378 ssh_runner.go:195] Run: crio --version
	I0725 18:50:34.134591   59378 ssh_runner.go:195] Run: crio --version
	I0725 18:50:34.166581   59378 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0725 18:50:34.167725   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:34.170389   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.170838   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:34.170869   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.171014   59378 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:34.174860   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:34.186830   59378 kubeadm.go:883] updating cluster {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:34.186934   59378 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0725 18:50:34.186964   59378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:34.221834   59378 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0725 18:50:34.221863   59378 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:34.221911   59378 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.221962   59378 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.221975   59378 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.221994   59378 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0725 18:50:34.222013   59378 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.221933   59378 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.222080   59378 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.222307   59378 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223376   59378 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.223405   59378 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0725 18:50:34.223394   59378 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.223416   59378 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223385   59378 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.223445   59378 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.223639   59378 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.223759   59378 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.460560   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0725 18:50:34.464591   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.478896   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.494335   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.507397   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.519589   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.524374   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.639570   59378 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0725 18:50:34.639620   59378 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.639628   59378 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0725 18:50:34.639664   59378 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.639678   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639701   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639728   59378 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0725 18:50:34.639749   59378 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.639756   59378 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0725 18:50:34.639710   59378 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0725 18:50:34.639789   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639791   59378 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.639793   59378 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.639815   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639822   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660351   59378 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0725 18:50:34.660401   59378 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.660418   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.660438   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.660446   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660488   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.660530   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.660621   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.748020   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0725 18:50:34.748120   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748133   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.748181   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.748204   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748254   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.761895   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.761960   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0725 18:50:34.762002   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.762056   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:34.762069   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.766440   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0725 18:50:34.766458   59378 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766478   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0725 18:50:34.766493   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766612   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0725 18:50:34.776491   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0725 18:50:34.806227   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0725 18:50:34.806283   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:34.806386   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:35.506093   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:32.098641   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:34.099078   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:31.540443   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.039950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.539852   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.039523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.539582   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.040355   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.539951   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.040161   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.540076   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:36.040195   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.787650   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:37.788363   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:36.755933   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.989415896s)
	I0725 18:50:36.755967   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0725 18:50:36.755980   59378 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.249846616s)
	I0725 18:50:36.756026   59378 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0725 18:50:36.755988   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.756064   59378 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.756113   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:36.756116   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.755938   59378 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.949524568s)
	I0725 18:50:36.756281   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0725 18:50:38.622350   59378 ssh_runner.go:235] Completed: which crictl: (1.866164977s)
	I0725 18:50:38.622426   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.866163984s)
	I0725 18:50:38.622504   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0725 18:50:38.622540   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622604   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622432   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.599286   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:37.098495   60732 node_ready.go:49] node "embed-certs-646344" has status "Ready":"True"
	I0725 18:50:37.098517   60732 node_ready.go:38] duration metric: took 7.003335292s for node "embed-certs-646344" to be "Ready" ...
	I0725 18:50:37.098526   60732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:37.104721   60732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109765   60732 pod_ready.go:92] pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.109788   60732 pod_ready.go:81] duration metric: took 5.033244ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109798   60732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113639   60732 pod_ready.go:92] pod "etcd-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.113661   60732 pod_ready.go:81] duration metric: took 3.854986ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113672   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.120875   60732 pod_ready.go:102] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:39.620552   60732 pod_ready.go:92] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:39.620573   60732 pod_ready.go:81] duration metric: took 2.506893984s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.620583   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628931   60732 pod_ready.go:92] pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.628959   60732 pod_ready.go:81] duration metric: took 1.008369558s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628973   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634812   60732 pod_ready.go:92] pod "kube-proxy-xk2lq" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.634840   60732 pod_ready.go:81] duration metric: took 5.858603ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634853   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:36.540043   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.039832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.540456   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.039553   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.539530   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.040246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.539520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.039506   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.539963   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:41.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.290126   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:42.787353   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.108821   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.486186911s)
	I0725 18:50:41.108854   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0725 18:50:41.108878   59378 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108884   59378 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.486217866s)
	I0725 18:50:41.108919   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108925   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 18:50:41.109010   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366140   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.257196486s)
	I0725 18:50:44.366170   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0725 18:50:44.366175   59378 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.257147663s)
	I0725 18:50:44.366192   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0725 18:50:44.366206   59378 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366252   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:45.013042   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0725 18:50:45.013079   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:45.013131   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:41.641738   60732 pod_ready.go:92] pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:41.641758   60732 pod_ready.go:81] duration metric: took 1.006897558s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:41.641768   60732 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:43.648859   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.147477   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.539822   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.039895   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.539947   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.040433   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.540098   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.040089   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.540140   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.040238   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.539529   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:46.040232   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.287326   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:47.288029   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.372000   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358829497s)
	I0725 18:50:46.372038   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0725 18:50:46.372056   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:46.372117   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:48.326922   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954778301s)
	I0725 18:50:48.326952   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0725 18:50:48.326981   59378 cache_images.go:123] Successfully loaded all cached images
	I0725 18:50:48.326987   59378 cache_images.go:92] duration metric: took 14.105111756s to LoadCachedImages
	I0725 18:50:48.326998   59378 kubeadm.go:934] updating node { 192.168.72.62 8443 v1.31.0-beta.0 crio true true} ...
	I0725 18:50:48.327229   59378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-371663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:48.327311   59378 ssh_runner.go:195] Run: crio config
	I0725 18:50:48.380082   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:48.380104   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:48.380116   59378 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:48.380141   59378 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.62 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-371663 NodeName:no-preload-371663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:48.380276   59378 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-371663"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:48.380365   59378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0725 18:50:48.390309   59378 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:48.390375   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:48.399357   59378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0725 18:50:48.426673   59378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0725 18:50:48.443648   59378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0725 18:50:48.460908   59378 ssh_runner.go:195] Run: grep 192.168.72.62	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:48.464505   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:48.475937   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:48.598976   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:48.614468   59378 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663 for IP: 192.168.72.62
	I0725 18:50:48.614495   59378 certs.go:194] generating shared ca certs ...
	I0725 18:50:48.614511   59378 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:48.614683   59378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:48.614722   59378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:48.614732   59378 certs.go:256] generating profile certs ...
	I0725 18:50:48.614802   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.key
	I0725 18:50:48.614860   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key.1b99cd2e
	I0725 18:50:48.614894   59378 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key
	I0725 18:50:48.615018   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:48.615047   59378 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:48.615055   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:48.615091   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:48.615150   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:48.615204   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:48.615259   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:48.615987   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:48.647327   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:48.689347   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:48.718281   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:48.749086   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0725 18:50:48.775795   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:48.804894   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:48.827724   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:50:48.850476   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:48.873193   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:48.897778   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:48.922891   59378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:48.940439   59378 ssh_runner.go:195] Run: openssl version
	I0725 18:50:48.945916   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:48.956285   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960454   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960503   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.965881   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:48.975282   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:48.984697   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988899   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988958   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.993992   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:49.003677   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:49.013434   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017584   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017633   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.022926   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:49.033066   59378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:49.037719   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:49.043668   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:49.049308   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:49.055105   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:49.060763   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:49.066635   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 18:50:49.072235   59378 kubeadm.go:392] StartCluster: {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:49.072358   59378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:49.072426   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.107696   59378 cri.go:89] found id: ""
	I0725 18:50:49.107780   59378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:49.118074   59378 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:49.118098   59378 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:49.118144   59378 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:49.127465   59378 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:49.128541   59378 kubeconfig.go:125] found "no-preload-371663" server: "https://192.168.72.62:8443"
	I0725 18:50:49.130601   59378 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:49.140027   59378 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.62
	I0725 18:50:49.140074   59378 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:49.140087   59378 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:49.140148   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.188682   59378 cri.go:89] found id: ""
	I0725 18:50:49.188743   59378 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:49.205634   59378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:49.214829   59378 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:49.214858   59378 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:49.214912   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:49.223758   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:49.223825   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:49.233245   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:49.241613   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:49.241669   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:49.249965   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.258343   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:49.258404   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.267058   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:49.275241   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:49.275297   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:49.284219   59378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:49.293754   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:49.398525   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.308879   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.505415   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.573519   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.655766   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:50.655857   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.148464   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:50.649767   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.539657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.039681   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.540207   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.040234   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.539937   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.039544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.539646   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.039759   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.540439   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.040293   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.786573   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.786918   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:53.790293   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.156896   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.656267   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.675997   59378 api_server.go:72] duration metric: took 1.02022659s to wait for apiserver process to appear ...
	I0725 18:50:51.676029   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:51.676060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:51.676567   59378 api_server.go:269] stopped: https://192.168.72.62:8443/healthz: Get "https://192.168.72.62:8443/healthz": dial tcp 192.168.72.62:8443: connect: connection refused
	I0725 18:50:52.176176   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.302009   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.302043   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.302060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.313888   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.313913   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.676316   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.680686   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:54.680712   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.176378   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.181169   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:55.181195   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.676817   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.681072   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:50:55.689674   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:50:55.689697   59378 api_server.go:131] duration metric: took 4.013661633s to wait for apiserver health ...
	I0725 18:50:55.689705   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:55.689711   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:55.691626   59378 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:50:55.692856   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:55.705154   59378 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:50:55.722942   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:55.735231   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:55.735270   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:55.735281   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:55.735294   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:55.735303   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:55.735316   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:55.735325   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:55.735338   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:55.735346   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:55.735357   59378 system_pods.go:74] duration metric: took 12.387054ms to wait for pod list to return data ...
	I0725 18:50:55.735370   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:55.738963   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:55.738984   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:55.738998   59378 node_conditions.go:105] duration metric: took 3.619707ms to run NodePressure ...
	I0725 18:50:55.739017   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:53.151773   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:55.647633   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.540537   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.040242   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.539493   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.039657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.540427   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.039461   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.539605   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.040573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.038936   59378 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043772   59378 kubeadm.go:739] kubelet initialised
	I0725 18:50:56.043793   59378 kubeadm.go:740] duration metric: took 4.834181ms waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043801   59378 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:56.050252   59378 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.055796   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055819   59378 pod_ready.go:81] duration metric: took 5.539256ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.055827   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055845   59378 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.059725   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059745   59378 pod_ready.go:81] duration metric: took 3.890205ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.059755   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059762   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.063388   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063409   59378 pod_ready.go:81] duration metric: took 3.63968ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.063419   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063427   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.126502   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126531   59378 pod_ready.go:81] duration metric: took 63.090083ms for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.126544   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126554   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.526433   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526465   59378 pod_ready.go:81] duration metric: took 399.900344ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.526477   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526485   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.926658   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926686   59378 pod_ready.go:81] duration metric: took 400.192009ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.926696   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926702   59378 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:57.326373   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326398   59378 pod_ready.go:81] duration metric: took 399.68759ms for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:57.326408   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326415   59378 pod_ready.go:38] duration metric: took 1.282607524s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:57.326433   59378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:57.338819   59378 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:57.338836   59378 kubeadm.go:597] duration metric: took 8.220732382s to restartPrimaryControlPlane
	I0725 18:50:57.338845   59378 kubeadm.go:394] duration metric: took 8.26661565s to StartCluster
	I0725 18:50:57.338862   59378 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.338938   59378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:57.341213   59378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.341506   59378 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:57.341574   59378 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:57.341660   59378 addons.go:69] Setting storage-provisioner=true in profile "no-preload-371663"
	I0725 18:50:57.341684   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:57.341696   59378 addons.go:234] Setting addon storage-provisioner=true in "no-preload-371663"
	I0725 18:50:57.341691   59378 addons.go:69] Setting default-storageclass=true in profile "no-preload-371663"
	W0725 18:50:57.341705   59378 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:57.341719   59378 addons.go:69] Setting metrics-server=true in profile "no-preload-371663"
	I0725 18:50:57.341737   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.341776   59378 addons.go:234] Setting addon metrics-server=true in "no-preload-371663"
	W0725 18:50:57.341790   59378 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:57.341727   59378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-371663"
	I0725 18:50:57.341827   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.342109   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342146   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342157   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342185   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342205   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342238   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.343259   59378 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:57.344618   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:57.359231   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0725 18:50:57.359295   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41709
	I0725 18:50:57.359759   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360261   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360528   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360554   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.360885   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.360970   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360989   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.361279   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.361299   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.361452   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.361551   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0725 18:50:57.361947   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.361954   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.362450   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.362468   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.362901   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.363495   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.363514   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.365316   59378 addons.go:234] Setting addon default-storageclass=true in "no-preload-371663"
	W0725 18:50:57.365329   59378 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:57.365349   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.365748   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.365785   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.377970   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0725 18:50:57.379022   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.379523   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.379543   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.379963   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.380124   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.382257   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0725 18:50:57.382648   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.382989   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
	I0725 18:50:57.383098   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383110   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.383292   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.383365   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.383456   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.383764   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.383854   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383876   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.384308   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.384905   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.384948   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.385117   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.385388   59378 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:57.386699   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:57.386716   59378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:57.386716   59378 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:57.386784   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.388097   59378 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.388127   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:57.388142   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.389322   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389752   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.389782   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389902   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.390094   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.390251   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.390402   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.391324   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391699   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.391723   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391870   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.392024   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.392156   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.392289   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.429920   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0725 18:50:57.430364   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.430865   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.430883   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.431250   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.431459   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.433381   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.433618   59378 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.433636   59378 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:57.433655   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.436318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437075   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.437100   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.437139   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437253   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.437431   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.437629   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.533461   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:57.551609   59378 node_ready.go:35] waiting up to 6m0s for node "no-preload-371663" to be "Ready" ...
	I0725 18:50:57.663269   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:57.663295   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:57.676948   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.698961   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.699589   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:57.699608   59378 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:57.732899   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:57.732928   59378 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:57.783734   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:58.930567   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.231552088s)
	I0725 18:50:58.930632   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930653   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930686   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.146908463s)
	I0725 18:50:58.930684   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.253701775s)
	I0725 18:50:58.930724   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930737   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930751   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930739   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931112   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931129   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931137   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931143   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931143   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931150   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931159   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931167   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931171   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931178   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931237   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931349   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931363   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931373   59378 addons.go:475] Verifying addon metrics-server=true in "no-preload-371663"
	I0725 18:50:58.931520   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931559   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931576   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932215   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932238   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932267   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.932277   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.932506   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.932541   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932556   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940231   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.940252   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.940516   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.940535   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940519   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.942747   59378 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0725 18:50:56.286642   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.787357   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.943983   59378 addons.go:510] duration metric: took 1.602421244s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0725 18:50:59.554933   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.648530   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:00.147626   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:56.539704   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.039573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.539523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.040168   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.540038   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.040304   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.540248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.039609   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.540022   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.039843   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.285836   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:03.287743   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.555887   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:04.056538   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:05.055354   59378 node_ready.go:49] node "no-preload-371663" has status "Ready":"True"
	I0725 18:51:05.055378   59378 node_ready.go:38] duration metric: took 7.50373959s for node "no-preload-371663" to be "Ready" ...
	I0725 18:51:05.055389   59378 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:51:05.061464   59378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066947   59378 pod_ready.go:92] pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.066967   59378 pod_ready.go:81] duration metric: took 5.477209ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066978   59378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071413   59378 pod_ready.go:92] pod "etcd-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.071431   59378 pod_ready.go:81] duration metric: took 4.445948ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071441   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076020   59378 pod_ready.go:92] pod "kube-apiserver-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.076042   59378 pod_ready.go:81] duration metric: took 4.593495ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076053   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:02.648362   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:04.648959   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.539808   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.039515   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.540034   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.040266   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.539829   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.039496   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.540260   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.040236   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.540450   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:06.039595   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:06.039675   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:06.077020   60176 cri.go:89] found id: ""
	I0725 18:51:06.077048   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.077058   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:06.077066   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:06.077125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:06.109173   60176 cri.go:89] found id: ""
	I0725 18:51:06.109203   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.109213   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:06.109220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:06.109283   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:06.141838   60176 cri.go:89] found id: ""
	I0725 18:51:06.141875   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.141882   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:06.141888   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:06.141947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:06.175036   60176 cri.go:89] found id: ""
	I0725 18:51:06.175063   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.175074   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:06.175081   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:06.175144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:06.207497   60176 cri.go:89] found id: ""
	I0725 18:51:06.207519   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.207527   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:06.207532   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:06.207589   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:06.241910   60176 cri.go:89] found id: ""
	I0725 18:51:06.241936   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.241943   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:06.241948   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:06.242001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:06.273353   60176 cri.go:89] found id: ""
	I0725 18:51:06.273381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.273391   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:06.273398   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:06.273472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:06.307358   60176 cri.go:89] found id: ""
	I0725 18:51:06.307381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.307391   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:06.307401   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:06.307415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:06.360759   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:06.360792   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:06.373930   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:06.373956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:51:05.787345   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:08.287436   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:07.081865   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.082937   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:10.583975   59378 pod_ready.go:92] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.584001   59378 pod_ready.go:81] duration metric: took 5.507938695s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.584015   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588959   59378 pod_ready.go:92] pod "kube-proxy-bf9rt" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.588978   59378 pod_ready.go:81] duration metric: took 4.956126ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588986   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593238   59378 pod_ready.go:92] pod "kube-scheduler-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.593255   59378 pod_ready.go:81] duration metric: took 4.263169ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593263   59378 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:07.147874   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.649266   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:51:06.488979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:06.489003   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:06.489018   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:06.553782   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:06.553813   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.093966   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:09.106176   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:09.106242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:09.143847   60176 cri.go:89] found id: ""
	I0725 18:51:09.143872   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.143880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:09.143885   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:09.143936   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:09.178605   60176 cri.go:89] found id: ""
	I0725 18:51:09.178636   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.178647   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:09.178654   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:09.178715   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:09.211866   60176 cri.go:89] found id: ""
	I0725 18:51:09.211892   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.211901   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:09.211906   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:09.211957   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:09.244343   60176 cri.go:89] found id: ""
	I0725 18:51:09.244371   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.244381   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:09.244389   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:09.244445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:09.279416   60176 cri.go:89] found id: ""
	I0725 18:51:09.279440   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.279448   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:09.279463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:09.279530   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:09.317039   60176 cri.go:89] found id: ""
	I0725 18:51:09.317064   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.317071   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:09.317077   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:09.317123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:09.347997   60176 cri.go:89] found id: ""
	I0725 18:51:09.348031   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.348042   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:09.348049   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:09.348107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:09.380485   60176 cri.go:89] found id: ""
	I0725 18:51:09.380514   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.380524   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:09.380535   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:09.380560   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:09.451881   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:09.451920   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.488427   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:09.488454   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:09.538096   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:09.538142   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:09.551001   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:09.551026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:09.628882   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:10.287604   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.787008   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.600101   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:15.102797   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.149625   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:14.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.129787   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:12.141852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:12.141915   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:12.178227   60176 cri.go:89] found id: ""
	I0725 18:51:12.178257   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.178266   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:12.178271   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:12.178329   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:12.209154   60176 cri.go:89] found id: ""
	I0725 18:51:12.209179   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.209186   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:12.209190   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:12.209238   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:12.244091   60176 cri.go:89] found id: ""
	I0725 18:51:12.244119   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.244127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:12.244134   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:12.244183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:12.277865   60176 cri.go:89] found id: ""
	I0725 18:51:12.277894   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.277906   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:12.277911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:12.277958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:12.311172   60176 cri.go:89] found id: ""
	I0725 18:51:12.311196   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.311207   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:12.311214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:12.311274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:12.341668   60176 cri.go:89] found id: ""
	I0725 18:51:12.341696   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.341706   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:12.341714   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:12.341775   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:12.375342   60176 cri.go:89] found id: ""
	I0725 18:51:12.375372   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.375383   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:12.375390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:12.375449   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:12.409783   60176 cri.go:89] found id: ""
	I0725 18:51:12.409807   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.409814   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:12.409822   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:12.409834   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:12.484503   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:12.484546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:12.522948   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:12.522974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:12.573975   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:12.574008   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:12.587600   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:12.587628   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:12.660403   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.161385   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:15.174773   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:15.174845   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:15.206845   60176 cri.go:89] found id: ""
	I0725 18:51:15.206871   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.206882   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:15.206889   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:15.206949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:15.239306   60176 cri.go:89] found id: ""
	I0725 18:51:15.239335   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.239344   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:15.239350   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:15.239437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:15.276152   60176 cri.go:89] found id: ""
	I0725 18:51:15.276187   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.276198   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:15.276207   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:15.276265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:15.309616   60176 cri.go:89] found id: ""
	I0725 18:51:15.309647   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.309659   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:15.309667   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:15.309729   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:15.343938   60176 cri.go:89] found id: ""
	I0725 18:51:15.343967   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.343978   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:15.343985   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:15.344051   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:15.380268   60176 cri.go:89] found id: ""
	I0725 18:51:15.380298   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.380310   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:15.380317   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:15.380448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:15.421291   60176 cri.go:89] found id: ""
	I0725 18:51:15.421337   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.421347   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:15.421353   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:15.421408   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:15.466805   60176 cri.go:89] found id: ""
	I0725 18:51:15.466826   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.466835   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:15.466845   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:15.466859   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:15.513464   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:15.513489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:15.567742   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:15.567775   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:15.583613   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:15.583647   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:15.653613   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.653637   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:15.653651   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:15.287256   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.786753   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.599678   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.600015   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.147792   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.148724   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:18.230294   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:18.244269   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:18.244352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:18.282255   60176 cri.go:89] found id: ""
	I0725 18:51:18.282281   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.282291   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:18.282298   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:18.282377   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:18.316217   60176 cri.go:89] found id: ""
	I0725 18:51:18.316250   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.316261   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:18.316269   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:18.316349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:18.347730   60176 cri.go:89] found id: ""
	I0725 18:51:18.347756   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.347764   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:18.347769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:18.347815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:18.379968   60176 cri.go:89] found id: ""
	I0725 18:51:18.379991   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.379999   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:18.380006   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:18.380062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:18.415621   60176 cri.go:89] found id: ""
	I0725 18:51:18.415644   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.415652   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:18.415657   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:18.415704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:18.452073   60176 cri.go:89] found id: ""
	I0725 18:51:18.452101   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.452109   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:18.452115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:18.452171   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:18.483337   60176 cri.go:89] found id: ""
	I0725 18:51:18.483382   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.483390   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:18.483396   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:18.483440   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:18.516941   60176 cri.go:89] found id: ""
	I0725 18:51:18.516966   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.516976   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:18.516987   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:18.517002   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:18.587295   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:18.587321   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:18.587338   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:18.666539   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:18.666569   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:18.707434   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:18.707465   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:18.761893   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:18.761932   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.276464   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:21.291939   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:21.292011   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:21.326022   60176 cri.go:89] found id: ""
	I0725 18:51:21.326055   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.326066   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:21.326073   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:21.326130   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:21.366081   60176 cri.go:89] found id: ""
	I0725 18:51:21.366104   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.366112   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:21.366117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:21.366165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:20.287325   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.287799   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.101134   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:24.600119   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.647763   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:23.648088   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:25.649170   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.403086   60176 cri.go:89] found id: ""
	I0725 18:51:21.403111   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.403122   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:21.403128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:21.403208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:21.439268   60176 cri.go:89] found id: ""
	I0725 18:51:21.439297   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.439305   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:21.439310   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:21.439359   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:21.483601   60176 cri.go:89] found id: ""
	I0725 18:51:21.483631   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.483639   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:21.483645   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:21.483704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:21.519061   60176 cri.go:89] found id: ""
	I0725 18:51:21.519093   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.519103   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:21.519111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:21.519186   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:21.548781   60176 cri.go:89] found id: ""
	I0725 18:51:21.548806   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.548814   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:21.548820   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:21.548881   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:21.581940   60176 cri.go:89] found id: ""
	I0725 18:51:21.581963   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.581970   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:21.581979   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:21.581991   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:21.634758   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:21.634795   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.648358   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:21.648382   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:21.716109   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:21.716133   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:21.716149   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:21.794003   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:21.794030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.331731   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:24.344646   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:24.344709   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:24.385373   60176 cri.go:89] found id: ""
	I0725 18:51:24.385395   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.385403   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:24.385408   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:24.385453   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:24.417015   60176 cri.go:89] found id: ""
	I0725 18:51:24.417044   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.417054   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:24.417061   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:24.417125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:24.457093   60176 cri.go:89] found id: ""
	I0725 18:51:24.457118   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.457127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:24.457132   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:24.457197   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:24.489155   60176 cri.go:89] found id: ""
	I0725 18:51:24.489183   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.489192   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:24.489197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:24.489253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:24.521907   60176 cri.go:89] found id: ""
	I0725 18:51:24.521934   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.521943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:24.521949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:24.522006   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:24.553652   60176 cri.go:89] found id: ""
	I0725 18:51:24.553688   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.553698   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:24.553705   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:24.553765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:24.587957   60176 cri.go:89] found id: ""
	I0725 18:51:24.587989   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.587997   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:24.588002   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:24.588060   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:24.623564   60176 cri.go:89] found id: ""
	I0725 18:51:24.623591   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.623600   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:24.623609   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:24.623624   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:24.676176   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:24.676208   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:24.689179   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:24.689202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:24.761900   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:24.761928   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:24.761943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:24.845021   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:24.845058   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.287960   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:26.288704   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.788851   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.099186   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:29.100563   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.147374   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:30.148158   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.384900   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:27.398947   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:27.399009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:27.431604   60176 cri.go:89] found id: ""
	I0725 18:51:27.431632   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.431641   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:27.431648   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:27.431698   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:27.464167   60176 cri.go:89] found id: ""
	I0725 18:51:27.464201   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.464212   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:27.464220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:27.464279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:27.497951   60176 cri.go:89] found id: ""
	I0725 18:51:27.497985   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.497996   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:27.498003   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:27.498056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:27.535363   60176 cri.go:89] found id: ""
	I0725 18:51:27.535389   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.535401   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:27.535406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:27.535452   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:27.565506   60176 cri.go:89] found id: ""
	I0725 18:51:27.565531   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.565541   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:27.565548   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:27.565615   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:27.595635   60176 cri.go:89] found id: ""
	I0725 18:51:27.595662   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.595672   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:27.595678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:27.595734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:27.627482   60176 cri.go:89] found id: ""
	I0725 18:51:27.627511   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.627522   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:27.627529   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:27.627596   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:27.663481   60176 cri.go:89] found id: ""
	I0725 18:51:27.663507   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.663517   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:27.663530   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:27.663544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:27.746487   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:27.746519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:27.783100   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:27.783128   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:27.834865   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:27.834895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:27.849097   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:27.849124   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:27.914406   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:30.415417   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:30.429086   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:30.429151   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:30.470514   60176 cri.go:89] found id: ""
	I0725 18:51:30.470538   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.470561   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:30.470569   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:30.470629   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:30.503903   60176 cri.go:89] found id: ""
	I0725 18:51:30.503931   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.503942   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:30.503950   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:30.504014   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:30.535562   60176 cri.go:89] found id: ""
	I0725 18:51:30.535589   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.535597   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:30.535602   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:30.535667   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:30.567435   60176 cri.go:89] found id: ""
	I0725 18:51:30.567461   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.567471   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:30.567478   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:30.567538   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:30.604430   60176 cri.go:89] found id: ""
	I0725 18:51:30.604455   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.604465   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:30.604471   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:30.604540   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:30.644788   60176 cri.go:89] found id: ""
	I0725 18:51:30.644814   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.644834   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:30.644843   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:30.644908   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:30.678530   60176 cri.go:89] found id: ""
	I0725 18:51:30.678572   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.678585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:30.678593   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:30.678668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:30.713090   60176 cri.go:89] found id: ""
	I0725 18:51:30.713112   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.713120   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:30.713128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:30.713141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:30.792075   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:30.792106   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:30.829452   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:30.829482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:30.879437   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:30.879474   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:30.892281   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:30.892308   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:30.959814   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:31.286895   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.786731   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:31.599727   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.600800   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:35.601282   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:32.647508   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:34.648594   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.460838   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:33.474242   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:33.474351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:33.508097   60176 cri.go:89] found id: ""
	I0725 18:51:33.508125   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.508134   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:33.508140   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:33.508188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:33.542576   60176 cri.go:89] found id: ""
	I0725 18:51:33.542605   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.542612   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:33.542618   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:33.542666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:33.576079   60176 cri.go:89] found id: ""
	I0725 18:51:33.576106   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.576115   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:33.576122   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:33.576187   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:33.610618   60176 cri.go:89] found id: ""
	I0725 18:51:33.610639   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.610646   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:33.610651   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:33.610702   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:33.641925   60176 cri.go:89] found id: ""
	I0725 18:51:33.641960   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.641972   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:33.641979   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:33.642047   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:33.675283   60176 cri.go:89] found id: ""
	I0725 18:51:33.675318   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.675333   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:33.675346   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:33.675412   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:33.707991   60176 cri.go:89] found id: ""
	I0725 18:51:33.708017   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.708026   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:33.708034   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:33.708094   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:33.744209   60176 cri.go:89] found id: ""
	I0725 18:51:33.744237   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.744247   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:33.744258   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:33.744273   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:33.794620   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:33.794648   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:33.807089   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:33.807118   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:33.870937   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:33.870960   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:33.870976   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:33.953214   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:33.953249   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:36.287050   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.788127   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.100230   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:40.600037   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:37.147276   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:39.147994   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:36.491625   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:36.504949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:36.505023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:36.538077   60176 cri.go:89] found id: ""
	I0725 18:51:36.538101   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.538109   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:36.538114   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:36.538165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:36.570239   60176 cri.go:89] found id: ""
	I0725 18:51:36.570262   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.570269   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:36.570275   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:36.570325   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:36.603096   60176 cri.go:89] found id: ""
	I0725 18:51:36.603124   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.603133   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:36.603139   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:36.603196   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:36.637479   60176 cri.go:89] found id: ""
	I0725 18:51:36.637506   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.637518   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:36.637525   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:36.637580   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:36.670834   60176 cri.go:89] found id: ""
	I0725 18:51:36.670859   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.670868   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:36.670875   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:36.670942   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:36.707825   60176 cri.go:89] found id: ""
	I0725 18:51:36.707851   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.707859   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:36.707866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:36.707924   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:36.748014   60176 cri.go:89] found id: ""
	I0725 18:51:36.748046   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.748058   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:36.748067   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:36.748132   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:36.779939   60176 cri.go:89] found id: ""
	I0725 18:51:36.779967   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.779975   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:36.779982   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:36.779994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:36.836710   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:36.836741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:36.849791   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:36.849830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:36.919247   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:36.919270   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:36.919286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:36.994368   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:36.994405   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:39.530980   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:39.543355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:39.543417   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:39.576897   60176 cri.go:89] found id: ""
	I0725 18:51:39.576925   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.576935   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:39.576941   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:39.576996   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:39.610545   60176 cri.go:89] found id: ""
	I0725 18:51:39.610576   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.610584   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:39.610596   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:39.610651   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:39.642072   60176 cri.go:89] found id: ""
	I0725 18:51:39.642097   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.642107   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:39.642114   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:39.642173   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:39.673841   60176 cri.go:89] found id: ""
	I0725 18:51:39.673866   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.673874   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:39.673880   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:39.673933   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:39.706537   60176 cri.go:89] found id: ""
	I0725 18:51:39.706562   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.706571   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:39.706584   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:39.706635   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:39.744897   60176 cri.go:89] found id: ""
	I0725 18:51:39.744924   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.744935   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:39.744942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:39.745004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:39.780466   60176 cri.go:89] found id: ""
	I0725 18:51:39.780493   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.780503   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:39.780510   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:39.780581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:39.813672   60176 cri.go:89] found id: ""
	I0725 18:51:39.813694   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.813701   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:39.813709   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:39.813721   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:39.862459   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:39.862489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:39.875276   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:39.875304   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:39.941693   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:39.941715   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:39.941729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:40.017010   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:40.017055   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:41.286377   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.289761   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.600311   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.098813   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:41.647858   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.647939   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.559158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:42.571866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:42.571945   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:42.605268   60176 cri.go:89] found id: ""
	I0725 18:51:42.605317   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.605326   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:42.605332   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:42.605392   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:42.641719   60176 cri.go:89] found id: ""
	I0725 18:51:42.641753   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.641764   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:42.641774   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:42.641837   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:42.675667   60176 cri.go:89] found id: ""
	I0725 18:51:42.675695   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.675703   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:42.675711   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:42.675773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:42.709895   60176 cri.go:89] found id: ""
	I0725 18:51:42.709923   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.709933   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:42.709940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:42.710002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:42.742278   60176 cri.go:89] found id: ""
	I0725 18:51:42.742308   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.742318   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:42.742325   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:42.742395   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:42.773623   60176 cri.go:89] found id: ""
	I0725 18:51:42.773651   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.773661   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:42.773668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:42.773727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:42.810538   60176 cri.go:89] found id: ""
	I0725 18:51:42.810566   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.810576   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:42.810583   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:42.810657   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:42.850508   60176 cri.go:89] found id: ""
	I0725 18:51:42.850530   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.850537   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:42.850545   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:42.850556   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:42.901350   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:42.901389   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:42.914573   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:42.914600   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:42.978823   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:42.978852   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:42.978866   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:43.057323   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:43.057357   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:45.593677   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:45.607689   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:45.607801   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:45.640969   60176 cri.go:89] found id: ""
	I0725 18:51:45.640997   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.641007   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:45.641014   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:45.641075   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:45.672268   60176 cri.go:89] found id: ""
	I0725 18:51:45.672293   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.672300   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:45.672310   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:45.672396   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:45.705582   60176 cri.go:89] found id: ""
	I0725 18:51:45.705610   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.705618   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:45.705625   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:45.705686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:45.747705   60176 cri.go:89] found id: ""
	I0725 18:51:45.747737   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.747759   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:45.747766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:45.747815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:45.787258   60176 cri.go:89] found id: ""
	I0725 18:51:45.787284   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.787294   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:45.787302   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:45.787366   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:45.820971   60176 cri.go:89] found id: ""
	I0725 18:51:45.820992   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.821008   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:45.821019   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:45.821068   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:45.853828   60176 cri.go:89] found id: ""
	I0725 18:51:45.853858   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.853869   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:45.853876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:45.853935   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:45.886645   60176 cri.go:89] found id: ""
	I0725 18:51:45.886672   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.886682   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:45.886692   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:45.886708   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:45.953195   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:45.953223   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:45.953239   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:46.027894   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:46.027929   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:46.067935   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:46.067960   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:46.120467   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:46.120500   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:45.788103   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.287846   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:47.100357   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:49.100578   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.148035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:50.148589   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.634095   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:48.647390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:48.647464   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:48.683149   60176 cri.go:89] found id: ""
	I0725 18:51:48.683171   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.683178   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:48.683203   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:48.683252   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:48.720502   60176 cri.go:89] found id: ""
	I0725 18:51:48.720529   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.720539   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:48.720546   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:48.720593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:48.752927   60176 cri.go:89] found id: ""
	I0725 18:51:48.752954   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.752962   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:48.752968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:48.753025   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:48.788434   60176 cri.go:89] found id: ""
	I0725 18:51:48.788460   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.788468   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:48.788474   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:48.788520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:48.825157   60176 cri.go:89] found id: ""
	I0725 18:51:48.825184   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.825194   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:48.825199   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:48.825248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:48.859948   60176 cri.go:89] found id: ""
	I0725 18:51:48.859973   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.859981   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:48.859986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:48.860046   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:48.894788   60176 cri.go:89] found id: ""
	I0725 18:51:48.894811   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.894819   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:48.894824   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:48.894878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:48.929619   60176 cri.go:89] found id: ""
	I0725 18:51:48.929645   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.929653   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:48.929662   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:48.929675   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:49.001842   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:49.001865   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:49.001888   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:49.086265   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:49.086299   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:49.127674   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:49.127704   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:49.181388   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:49.181424   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:50.787213   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:53.287266   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.601462   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.099078   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:52.647863   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.648789   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.695119   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:51.707568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:51.707630   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:51.742936   60176 cri.go:89] found id: ""
	I0725 18:51:51.742963   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.742973   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:51.742980   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:51.743033   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:51.776584   60176 cri.go:89] found id: ""
	I0725 18:51:51.776610   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.776618   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:51.776623   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:51.776691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:51.809763   60176 cri.go:89] found id: ""
	I0725 18:51:51.809787   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.809795   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:51.809800   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:51.809846   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:51.843330   60176 cri.go:89] found id: ""
	I0725 18:51:51.843359   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.843366   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:51.843372   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:51.843428   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:51.877636   60176 cri.go:89] found id: ""
	I0725 18:51:51.877670   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.877680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:51.877685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:51.877734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:51.911846   60176 cri.go:89] found id: ""
	I0725 18:51:51.911869   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.911876   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:51.911881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:51.911927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:51.945447   60176 cri.go:89] found id: ""
	I0725 18:51:51.945474   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.945482   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:51.945488   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:51.945539   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:51.976801   60176 cri.go:89] found id: ""
	I0725 18:51:51.976828   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.976838   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:51.976848   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:51.976863   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:51.989131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:51.989158   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:52.055834   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:52.055857   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:52.055871   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:52.132360   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:52.132399   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:52.170676   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:52.170706   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:54.724654   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:54.738852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:54.738910   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:54.772356   60176 cri.go:89] found id: ""
	I0725 18:51:54.772386   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.772396   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:54.772403   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:54.772463   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:54.805079   60176 cri.go:89] found id: ""
	I0725 18:51:54.805105   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.805115   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:54.805122   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:54.805179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:54.836276   60176 cri.go:89] found id: ""
	I0725 18:51:54.836303   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.836313   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:54.836329   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:54.836394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:54.869019   60176 cri.go:89] found id: ""
	I0725 18:51:54.869046   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.869053   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:54.869059   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:54.869108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:54.905448   60176 cri.go:89] found id: ""
	I0725 18:51:54.905475   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.905485   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:54.905492   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:54.905553   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:54.937364   60176 cri.go:89] found id: ""
	I0725 18:51:54.937387   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.937396   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:54.937401   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:54.937448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:54.969287   60176 cri.go:89] found id: ""
	I0725 18:51:54.969322   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.969333   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:54.969340   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:54.969405   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:55.002779   60176 cri.go:89] found id: ""
	I0725 18:51:55.002804   60176 logs.go:276] 0 containers: []
	W0725 18:51:55.002811   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:55.002819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:55.002830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:55.015588   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:55.015614   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:55.093349   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:55.093372   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:55.093388   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:55.174006   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:55.174046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:55.211316   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:55.211347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:55.787379   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.286757   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:56.099628   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.100403   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:00.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.148430   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:59.648971   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.762027   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:57.774121   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:57.774194   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:57.814748   60176 cri.go:89] found id: ""
	I0725 18:51:57.814779   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.814790   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:57.814798   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:57.814860   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:57.851037   60176 cri.go:89] found id: ""
	I0725 18:51:57.851063   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.851070   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:57.851075   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:57.851123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:57.882717   60176 cri.go:89] found id: ""
	I0725 18:51:57.882749   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.882760   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:57.882768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:57.882830   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:57.917019   60176 cri.go:89] found id: ""
	I0725 18:51:57.917049   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.917059   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:57.917066   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:57.917126   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:57.950853   60176 cri.go:89] found id: ""
	I0725 18:51:57.950882   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.950891   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:57.950896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:57.950962   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:57.991946   60176 cri.go:89] found id: ""
	I0725 18:51:57.991970   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.991980   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:57.991986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:57.992049   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:58.037572   60176 cri.go:89] found id: ""
	I0725 18:51:58.037602   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.037611   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:58.037617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:58.037679   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:58.073018   60176 cri.go:89] found id: ""
	I0725 18:51:58.073040   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.073048   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:58.073056   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:58.073068   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:58.144357   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:58.144382   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:58.144398   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:58.224162   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:58.224202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:58.260888   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:58.260914   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:58.313819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:58.313848   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:00.826939   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:00.838883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:00.838951   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:00.872544   60176 cri.go:89] found id: ""
	I0725 18:52:00.872573   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.872584   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:00.872600   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:00.872663   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:00.903504   60176 cri.go:89] found id: ""
	I0725 18:52:00.903526   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.903533   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:00.903539   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:00.903585   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:00.938057   60176 cri.go:89] found id: ""
	I0725 18:52:00.938085   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.938095   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:00.938103   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:00.938168   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:00.970586   60176 cri.go:89] found id: ""
	I0725 18:52:00.970616   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.970625   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:00.970631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:00.970699   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:01.004158   60176 cri.go:89] found id: ""
	I0725 18:52:01.004192   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.004201   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:01.004205   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:01.004265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:01.036833   60176 cri.go:89] found id: ""
	I0725 18:52:01.036862   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.036871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:01.036876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:01.036927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:01.072207   60176 cri.go:89] found id: ""
	I0725 18:52:01.072236   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.072247   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:01.072253   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:01.072309   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:01.110805   60176 cri.go:89] found id: ""
	I0725 18:52:01.110859   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.110871   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:01.110882   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:01.110898   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:01.150422   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:01.150448   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:01.198988   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:01.199026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:01.212826   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:01.212860   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:01.282008   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:01.282034   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:01.282054   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:00.787431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.286174   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.599299   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:05.099494   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.147372   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:04.147989   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.148300   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.865014   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:03.877335   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:03.877419   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:03.913376   60176 cri.go:89] found id: ""
	I0725 18:52:03.913406   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.913413   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:03.913420   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:03.913469   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:03.948997   60176 cri.go:89] found id: ""
	I0725 18:52:03.949022   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.949029   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:03.949034   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:03.949082   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:03.985320   60176 cri.go:89] found id: ""
	I0725 18:52:03.985353   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.985361   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:03.985367   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:03.985423   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:04.019626   60176 cri.go:89] found id: ""
	I0725 18:52:04.019648   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.019656   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:04.019662   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:04.019716   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:04.050947   60176 cri.go:89] found id: ""
	I0725 18:52:04.050978   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.050989   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:04.050997   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:04.051066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:04.083581   60176 cri.go:89] found id: ""
	I0725 18:52:04.083613   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.083625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:04.083633   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:04.083712   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:04.117537   60176 cri.go:89] found id: ""
	I0725 18:52:04.117574   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.117585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:04.117592   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:04.117652   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:04.151531   60176 cri.go:89] found id: ""
	I0725 18:52:04.151556   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.151563   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:04.151575   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:04.151593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:04.201037   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:04.201067   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:04.214848   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:04.214879   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:04.281309   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:04.281338   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:04.281353   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:04.360880   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:04.360913   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:05.287780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.288971   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.100417   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:09.602529   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:08.149450   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:10.647672   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.899950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:06.912053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:06.912124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:06.945726   60176 cri.go:89] found id: ""
	I0725 18:52:06.945752   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.945761   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:06.945766   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:06.945824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:06.979170   60176 cri.go:89] found id: ""
	I0725 18:52:06.979200   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.979210   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:06.979217   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:06.979279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:07.009633   60176 cri.go:89] found id: ""
	I0725 18:52:07.009661   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.009670   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:07.009675   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:07.009735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:07.042022   60176 cri.go:89] found id: ""
	I0725 18:52:07.042045   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.042054   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:07.042061   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:07.042121   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:07.074755   60176 cri.go:89] found id: ""
	I0725 18:52:07.074779   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.074787   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:07.074792   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:07.074853   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:07.109421   60176 cri.go:89] found id: ""
	I0725 18:52:07.109447   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.109457   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:07.109464   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:07.109522   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:07.144848   60176 cri.go:89] found id: ""
	I0725 18:52:07.144879   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.144889   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:07.144897   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:07.144956   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:07.182129   60176 cri.go:89] found id: ""
	I0725 18:52:07.182157   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.182169   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:07.182178   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:07.182192   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:07.235471   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:07.235509   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:07.251999   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:07.252025   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:07.334671   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:07.334691   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:07.334703   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:07.415819   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:07.415853   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.953603   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:09.966281   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:09.966362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:09.998237   60176 cri.go:89] found id: ""
	I0725 18:52:09.998259   60176 logs.go:276] 0 containers: []
	W0725 18:52:09.998267   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:09.998272   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:09.998332   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:10.030191   60176 cri.go:89] found id: ""
	I0725 18:52:10.030213   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.030220   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:10.030228   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:10.030273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:10.062117   60176 cri.go:89] found id: ""
	I0725 18:52:10.062144   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.062154   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:10.062159   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:10.062208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:10.093801   60176 cri.go:89] found id: ""
	I0725 18:52:10.093831   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.093841   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:10.093848   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:10.093911   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:10.125705   60176 cri.go:89] found id: ""
	I0725 18:52:10.125731   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.125741   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:10.125748   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:10.125814   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:10.158731   60176 cri.go:89] found id: ""
	I0725 18:52:10.158753   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.158761   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:10.158766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:10.158810   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:10.190408   60176 cri.go:89] found id: ""
	I0725 18:52:10.190435   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.190443   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:10.190449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:10.190503   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:10.221937   60176 cri.go:89] found id: ""
	I0725 18:52:10.221967   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.221977   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:10.221992   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:10.222007   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:10.270299   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:10.270332   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:10.283787   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:10.283823   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:10.358121   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:10.358146   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:10.358163   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:10.437607   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:10.437643   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.786088   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:11.786251   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:13.786457   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.099676   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.600380   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.647922   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.648433   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.978064   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:12.995812   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:12.995868   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:13.041196   60176 cri.go:89] found id: ""
	I0725 18:52:13.041222   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.041231   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:13.041239   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:13.041290   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:13.074981   60176 cri.go:89] found id: ""
	I0725 18:52:13.075005   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.075013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:13.075018   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:13.075078   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:13.108689   60176 cri.go:89] found id: ""
	I0725 18:52:13.108714   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.108725   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:13.108732   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:13.108788   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:13.144876   60176 cri.go:89] found id: ""
	I0725 18:52:13.144903   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.144913   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:13.144920   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:13.145008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:13.177912   60176 cri.go:89] found id: ""
	I0725 18:52:13.177936   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.177943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:13.177949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:13.178004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:13.208752   60176 cri.go:89] found id: ""
	I0725 18:52:13.208783   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.208794   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:13.208802   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:13.208861   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:13.240146   60176 cri.go:89] found id: ""
	I0725 18:52:13.240181   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.240191   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:13.240197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:13.240265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:13.276749   60176 cri.go:89] found id: ""
	I0725 18:52:13.276775   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.276783   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:13.276793   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:13.276808   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:13.342307   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:13.342341   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:13.342358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:13.426659   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:13.426691   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:13.462986   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:13.463014   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:13.513921   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:13.513956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.028587   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:16.041712   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:16.041771   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:16.074562   60176 cri.go:89] found id: ""
	I0725 18:52:16.074593   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.074603   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:16.074611   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:16.074668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:16.110581   60176 cri.go:89] found id: ""
	I0725 18:52:16.110610   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.110620   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:16.110627   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:16.110686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:16.145233   60176 cri.go:89] found id: ""
	I0725 18:52:16.145256   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.145266   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:16.145274   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:16.145333   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:16.180032   60176 cri.go:89] found id: ""
	I0725 18:52:16.180059   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.180070   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:16.180084   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:16.180147   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:16.211984   60176 cri.go:89] found id: ""
	I0725 18:52:16.212013   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.212021   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:16.212028   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:16.212086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:16.243930   60176 cri.go:89] found id: ""
	I0725 18:52:16.243958   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.243965   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:16.243970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:16.244018   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:16.276858   60176 cri.go:89] found id: ""
	I0725 18:52:16.276886   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.276895   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:16.276903   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:16.276964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:16.309039   60176 cri.go:89] found id: ""
	I0725 18:52:16.309068   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.309079   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:16.309089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:16.309103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:16.358664   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:16.358699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.371701   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:16.371733   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:52:15.786767   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.787058   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.099941   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.100836   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.148099   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.150035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:52:16.440851   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:16.440877   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:16.440892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:16.515546   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:16.515581   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.053916   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:19.067831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:19.067899   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:19.100740   60176 cri.go:89] found id: ""
	I0725 18:52:19.100765   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.100776   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:19.100783   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:19.100844   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:19.137247   60176 cri.go:89] found id: ""
	I0725 18:52:19.137272   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.137279   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:19.137284   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:19.137348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:19.181550   60176 cri.go:89] found id: ""
	I0725 18:52:19.181582   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.181594   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:19.181601   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:19.181666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:19.215392   60176 cri.go:89] found id: ""
	I0725 18:52:19.215418   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.215427   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:19.215433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:19.215495   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:19.247896   60176 cri.go:89] found id: ""
	I0725 18:52:19.247923   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.247933   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:19.247940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:19.248001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:19.285250   60176 cri.go:89] found id: ""
	I0725 18:52:19.285276   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.285286   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:19.285293   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:19.285362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:19.323470   60176 cri.go:89] found id: ""
	I0725 18:52:19.323500   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.323510   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:19.323518   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:19.323583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:19.358435   60176 cri.go:89] found id: ""
	I0725 18:52:19.358458   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.358466   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:19.358475   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:19.358491   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:19.422806   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:19.422825   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:19.422837   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:19.504316   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:19.504370   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.543929   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:19.543956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:19.596268   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:19.596300   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:20.286982   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.287235   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.601342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.099874   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.648118   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.147655   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.148904   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.110193   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:22.123411   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:22.123472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:22.158539   60176 cri.go:89] found id: ""
	I0725 18:52:22.158577   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.158588   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:22.158595   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:22.158654   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:22.196231   60176 cri.go:89] found id: ""
	I0725 18:52:22.196260   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.196270   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:22.196277   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:22.196354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:22.233119   60176 cri.go:89] found id: ""
	I0725 18:52:22.233150   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.233160   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:22.233167   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:22.233231   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:22.265273   60176 cri.go:89] found id: ""
	I0725 18:52:22.265302   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.265312   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:22.265322   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:22.265384   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:22.298933   60176 cri.go:89] found id: ""
	I0725 18:52:22.298959   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.298968   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:22.298982   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:22.299055   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:22.330841   60176 cri.go:89] found id: ""
	I0725 18:52:22.330877   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.330888   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:22.330896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:22.330965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:22.363717   60176 cri.go:89] found id: ""
	I0725 18:52:22.363743   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.363753   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:22.363760   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:22.363818   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:22.398672   60176 cri.go:89] found id: ""
	I0725 18:52:22.398701   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.398711   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:22.398722   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:22.398739   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:22.452774   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:22.452807   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:22.465478   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:22.465507   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:22.538473   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:22.538492   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:22.538504   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:22.622982   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:22.623029   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:25.163174   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:25.176183   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:25.176242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:25.212455   60176 cri.go:89] found id: ""
	I0725 18:52:25.212488   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.212497   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:25.212504   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:25.212558   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:25.249901   60176 cri.go:89] found id: ""
	I0725 18:52:25.249930   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.249938   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:25.249943   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:25.250002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:25.284400   60176 cri.go:89] found id: ""
	I0725 18:52:25.284425   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.284435   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:25.284443   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:25.284510   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:25.322175   60176 cri.go:89] found id: ""
	I0725 18:52:25.322199   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.322208   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:25.322214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:25.322274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:25.358579   60176 cri.go:89] found id: ""
	I0725 18:52:25.358606   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.358613   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:25.358618   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:25.358668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:25.393516   60176 cri.go:89] found id: ""
	I0725 18:52:25.393541   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.393552   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:25.393559   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:25.393619   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:25.426256   60176 cri.go:89] found id: ""
	I0725 18:52:25.426284   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.426293   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:25.426300   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:25.426386   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:25.460227   60176 cri.go:89] found id: ""
	I0725 18:52:25.460249   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.460257   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:25.460265   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:25.460276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:25.512461   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:25.512494   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:25.526304   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:25.526347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:25.597593   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:25.597618   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:25.597634   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:25.674233   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:25.674269   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:24.787536   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:27.286447   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.100033   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.599703   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.648517   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:30.650728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.209473   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:28.223161   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:28.223226   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:28.260471   60176 cri.go:89] found id: ""
	I0725 18:52:28.260500   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.260510   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:28.260517   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:28.260578   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:28.296055   60176 cri.go:89] found id: ""
	I0725 18:52:28.296093   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.296109   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:28.296117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:28.296179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:28.327790   60176 cri.go:89] found id: ""
	I0725 18:52:28.327819   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.327830   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:28.327836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:28.327896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:28.359967   60176 cri.go:89] found id: ""
	I0725 18:52:28.359994   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.360005   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:28.360012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:28.360076   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:28.394025   60176 cri.go:89] found id: ""
	I0725 18:52:28.394057   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.394065   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:28.394070   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:28.394119   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:28.425844   60176 cri.go:89] found id: ""
	I0725 18:52:28.425866   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.425874   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:28.425881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:28.425952   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:28.459239   60176 cri.go:89] found id: ""
	I0725 18:52:28.459266   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.459276   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:28.459283   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:28.459355   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:28.493964   60176 cri.go:89] found id: ""
	I0725 18:52:28.493992   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.494004   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:28.494015   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:28.494030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:28.543108   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:28.543138   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:28.556408   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:28.556440   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:28.622780   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:28.622802   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:28.622815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:28.705901   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:28.705935   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.247642   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:31.260467   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:31.260536   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:31.293280   60176 cri.go:89] found id: ""
	I0725 18:52:31.293303   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.293311   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:31.293316   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:31.293372   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:31.325186   60176 cri.go:89] found id: ""
	I0725 18:52:31.325220   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.325232   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:31.325238   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:31.325295   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:31.359715   60176 cri.go:89] found id: ""
	I0725 18:52:31.359744   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.359756   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:31.359763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:31.359821   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:29.287628   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.787471   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.099921   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.600091   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.147181   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:35.147612   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.396998   60176 cri.go:89] found id: ""
	I0725 18:52:31.397031   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.397043   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:31.397051   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:31.397107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:31.430896   60176 cri.go:89] found id: ""
	I0725 18:52:31.430921   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.430934   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:31.430941   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:31.430993   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:31.464746   60176 cri.go:89] found id: ""
	I0725 18:52:31.464775   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.464785   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:31.464791   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:31.464856   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:31.500645   60176 cri.go:89] found id: ""
	I0725 18:52:31.500668   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.500677   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:31.500682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:31.500730   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:31.534394   60176 cri.go:89] found id: ""
	I0725 18:52:31.534418   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.534427   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:31.534434   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:31.534446   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:31.615633   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:31.615667   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.657138   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:31.657164   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:31.707872   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:31.707907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:31.721076   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:31.721100   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:31.787451   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:34.288248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:34.301172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:34.301230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:34.333115   60176 cri.go:89] found id: ""
	I0725 18:52:34.333143   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.333153   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:34.333159   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:34.333206   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:34.368762   60176 cri.go:89] found id: ""
	I0725 18:52:34.368794   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.368805   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:34.368812   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:34.368875   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:34.404655   60176 cri.go:89] found id: ""
	I0725 18:52:34.404681   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.404691   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:34.404699   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:34.404759   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:34.438034   60176 cri.go:89] found id: ""
	I0725 18:52:34.438058   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.438068   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:34.438075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:34.438134   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:34.472642   60176 cri.go:89] found id: ""
	I0725 18:52:34.472667   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.472678   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:34.472684   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:34.472744   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:34.511813   60176 cri.go:89] found id: ""
	I0725 18:52:34.511846   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.511858   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:34.511876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:34.511947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:34.544142   60176 cri.go:89] found id: ""
	I0725 18:52:34.544172   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.544183   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:34.544190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:34.544253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:34.580404   60176 cri.go:89] found id: ""
	I0725 18:52:34.580428   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.580439   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:34.580451   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:34.580468   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:34.620866   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:34.620892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:34.675204   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:34.675237   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:34.688592   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:34.688616   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:34.760208   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:34.760234   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:34.760251   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:34.288570   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.786448   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.786936   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.099207   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.099682   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.100107   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.647899   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.147664   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.337593   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:37.353055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:37.353125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:37.386957   60176 cri.go:89] found id: ""
	I0725 18:52:37.386985   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.386996   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:37.387003   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:37.387062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:37.419464   60176 cri.go:89] found id: ""
	I0725 18:52:37.419489   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.419496   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:37.419501   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:37.419557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:37.452553   60176 cri.go:89] found id: ""
	I0725 18:52:37.452582   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.452592   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:37.452598   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:37.452660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:37.484946   60176 cri.go:89] found id: ""
	I0725 18:52:37.484971   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.484978   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:37.484983   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:37.485029   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:37.517509   60176 cri.go:89] found id: ""
	I0725 18:52:37.517535   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.517546   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:37.517554   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:37.517604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:37.549971   60176 cri.go:89] found id: ""
	I0725 18:52:37.549995   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.550003   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:37.550010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:37.550067   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:37.581630   60176 cri.go:89] found id: ""
	I0725 18:52:37.581661   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.581670   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:37.581676   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:37.581736   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:37.616677   60176 cri.go:89] found id: ""
	I0725 18:52:37.616705   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.616714   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:37.616727   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:37.616741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:37.630482   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:37.630517   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:37.699856   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:37.699883   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:37.699912   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:37.781132   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:37.781162   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:37.819877   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:37.819904   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.372910   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:40.385605   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:40.385672   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:40.420547   60176 cri.go:89] found id: ""
	I0725 18:52:40.420575   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.420586   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:40.420593   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:40.420642   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:40.455644   60176 cri.go:89] found id: ""
	I0725 18:52:40.455666   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.455674   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:40.455679   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:40.455735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:40.486576   60176 cri.go:89] found id: ""
	I0725 18:52:40.486599   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.486607   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:40.486613   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:40.486661   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:40.520015   60176 cri.go:89] found id: ""
	I0725 18:52:40.520038   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.520046   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:40.520053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:40.520115   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:40.550645   60176 cri.go:89] found id: ""
	I0725 18:52:40.550672   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.550680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:40.550685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:40.550739   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:40.584736   60176 cri.go:89] found id: ""
	I0725 18:52:40.584759   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.584766   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:40.584771   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:40.584827   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:40.620112   60176 cri.go:89] found id: ""
	I0725 18:52:40.620140   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.620151   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:40.620158   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:40.620221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:40.660888   60176 cri.go:89] found id: ""
	I0725 18:52:40.660910   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.660917   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:40.660926   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:40.660937   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.713935   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:40.713967   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:40.727194   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:40.727218   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:40.797362   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:40.797387   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:40.797408   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:40.878723   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:40.878756   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:41.286942   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.288780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.600347   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:45.099379   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.148037   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:44.648236   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.421579   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:43.434054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:43.434113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:43.468844   60176 cri.go:89] found id: ""
	I0725 18:52:43.468870   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.468880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:43.468887   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:43.468948   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:43.501075   60176 cri.go:89] found id: ""
	I0725 18:52:43.501102   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.501113   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:43.501120   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:43.501175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:43.533347   60176 cri.go:89] found id: ""
	I0725 18:52:43.533372   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.533381   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:43.533387   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:43.533439   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:43.569764   60176 cri.go:89] found id: ""
	I0725 18:52:43.569787   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.569795   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:43.569801   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:43.569851   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:43.604897   60176 cri.go:89] found id: ""
	I0725 18:52:43.604924   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.604935   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:43.604942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:43.604999   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:43.638584   60176 cri.go:89] found id: ""
	I0725 18:52:43.638621   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.638633   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:43.638640   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:43.638691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:43.672302   60176 cri.go:89] found id: ""
	I0725 18:52:43.672348   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.672359   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:43.672366   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:43.672425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:43.708589   60176 cri.go:89] found id: ""
	I0725 18:52:43.708620   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.708630   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:43.708641   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:43.708660   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:43.761733   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:43.761766   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:43.775233   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:43.775258   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:43.840767   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:43.840788   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:43.840803   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:43.914698   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:43.914730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:45.786511   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.787882   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.100130   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.600576   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.147728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.648227   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:46.451913   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:46.465836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:46.465896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:46.499330   60176 cri.go:89] found id: ""
	I0725 18:52:46.499359   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.499369   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:46.499381   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:46.499446   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:46.537724   60176 cri.go:89] found id: ""
	I0725 18:52:46.537748   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.537758   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:46.537764   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:46.537825   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:46.568410   60176 cri.go:89] found id: ""
	I0725 18:52:46.568437   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.568446   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:46.568453   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:46.568519   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:46.599497   60176 cri.go:89] found id: ""
	I0725 18:52:46.599525   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.599535   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:46.599542   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:46.599607   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:46.631388   60176 cri.go:89] found id: ""
	I0725 18:52:46.631418   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.631427   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:46.631433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:46.631489   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:46.670666   60176 cri.go:89] found id: ""
	I0725 18:52:46.670688   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.670695   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:46.670701   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:46.670756   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:46.702825   60176 cri.go:89] found id: ""
	I0725 18:52:46.702862   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.702874   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:46.702883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:46.702947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:46.738431   60176 cri.go:89] found id: ""
	I0725 18:52:46.738459   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.738469   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:46.738479   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:46.738493   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:46.796704   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:46.796748   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:46.812042   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:46.812072   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:46.884905   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:46.884927   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:46.884942   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:46.965733   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:46.965773   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.505190   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:49.519648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:49.519733   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:49.559027   60176 cri.go:89] found id: ""
	I0725 18:52:49.559057   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.559064   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:49.559072   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:49.559124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:49.591468   60176 cri.go:89] found id: ""
	I0725 18:52:49.591489   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.591497   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:49.591503   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:49.591557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:49.629091   60176 cri.go:89] found id: ""
	I0725 18:52:49.629120   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.629129   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:49.629135   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:49.629199   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:49.664584   60176 cri.go:89] found id: ""
	I0725 18:52:49.664621   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.664633   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:49.664641   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:49.664693   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:49.695208   60176 cri.go:89] found id: ""
	I0725 18:52:49.695237   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.695247   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:49.695258   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:49.695323   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:49.726260   60176 cri.go:89] found id: ""
	I0725 18:52:49.726288   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.726299   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:49.726306   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:49.726468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:49.759938   60176 cri.go:89] found id: ""
	I0725 18:52:49.759969   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.759981   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:49.759990   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:49.760043   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:49.794113   60176 cri.go:89] found id: ""
	I0725 18:52:49.794142   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.794153   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:49.794164   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:49.794178   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.834409   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:49.834443   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:49.890684   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:49.890730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:49.904504   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:49.904534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:49.971482   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:49.971508   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:49.971523   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:50.286712   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.786827   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.099988   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.600144   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.147545   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.147590   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:56.148752   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.552586   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:52.564658   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:52.564732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:52.604434   60176 cri.go:89] found id: ""
	I0725 18:52:52.604460   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.604470   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:52.604478   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:52.604532   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:52.638870   60176 cri.go:89] found id: ""
	I0725 18:52:52.638893   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.638907   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:52.638914   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:52.638973   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:52.670494   60176 cri.go:89] found id: ""
	I0725 18:52:52.670521   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.670531   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:52.670538   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:52.670604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:52.702250   60176 cri.go:89] found id: ""
	I0725 18:52:52.702282   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.702291   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:52.702298   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:52.702360   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:52.734144   60176 cri.go:89] found id: ""
	I0725 18:52:52.734172   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.734181   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:52.734187   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:52.734241   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:52.767581   60176 cri.go:89] found id: ""
	I0725 18:52:52.767606   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.767617   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:52.767624   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:52.767687   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:52.798874   60176 cri.go:89] found id: ""
	I0725 18:52:52.798895   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.798903   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:52.798908   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:52.798965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:52.829237   60176 cri.go:89] found id: ""
	I0725 18:52:52.829266   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.829276   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:52.829287   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:52.829309   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:52.879820   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:52.879856   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:52.893453   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:52.893477   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:52.962899   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:52.962925   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:52.962944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:53.042202   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:53.042234   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.581146   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:55.594458   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:55.594529   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:55.628122   60176 cri.go:89] found id: ""
	I0725 18:52:55.628152   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.628163   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:55.628170   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:55.628240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:55.661098   60176 cri.go:89] found id: ""
	I0725 18:52:55.661126   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.661137   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:55.661143   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:55.661195   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:55.694635   60176 cri.go:89] found id: ""
	I0725 18:52:55.694664   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.694675   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:55.694682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:55.694746   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:55.728875   60176 cri.go:89] found id: ""
	I0725 18:52:55.728902   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.728912   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:55.728924   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:55.728986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:55.764386   60176 cri.go:89] found id: ""
	I0725 18:52:55.764414   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.764423   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:55.764430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:55.764487   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:55.798285   60176 cri.go:89] found id: ""
	I0725 18:52:55.798335   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.798348   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:55.798355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:55.798407   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:55.833049   60176 cri.go:89] found id: ""
	I0725 18:52:55.833076   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.833083   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:55.833088   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:55.833144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:55.872278   60176 cri.go:89] found id: ""
	I0725 18:52:55.872310   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.872335   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:55.872347   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:55.872362   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.908301   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:55.908344   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:55.960197   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:55.960230   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:55.973912   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:55.973941   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:56.042103   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:56.042128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:56.042141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:54.787516   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.286820   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.099342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:59.099712   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.647566   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:00.647721   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.618832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:58.631315   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:58.631374   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:58.666492   60176 cri.go:89] found id: ""
	I0725 18:52:58.666521   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.666532   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:58.666540   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:58.666608   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:58.700391   60176 cri.go:89] found id: ""
	I0725 18:52:58.700421   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.700431   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:58.700450   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:58.700518   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:58.734582   60176 cri.go:89] found id: ""
	I0725 18:52:58.734608   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.734617   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:58.734621   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:58.734692   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:58.767777   60176 cri.go:89] found id: ""
	I0725 18:52:58.767806   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.767817   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:58.767823   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:58.767886   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:58.801021   60176 cri.go:89] found id: ""
	I0725 18:52:58.801046   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.801053   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:58.801058   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:58.801102   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:58.833191   60176 cri.go:89] found id: ""
	I0725 18:52:58.833223   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.833231   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:58.833236   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:58.833284   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:58.864805   60176 cri.go:89] found id: ""
	I0725 18:52:58.864839   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.864849   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:58.864854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:58.864916   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:58.896342   60176 cri.go:89] found id: ""
	I0725 18:52:58.896373   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.896384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:58.896396   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:58.896415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:58.950614   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:58.950652   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:58.974026   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:58.974063   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:59.056282   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:59.056305   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:59.056349   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:59.138254   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:59.138292   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:59.785805   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.787477   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.099859   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.604940   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.147177   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:05.147885   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.680405   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:01.693093   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:01.693161   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:01.725456   60176 cri.go:89] found id: ""
	I0725 18:53:01.725483   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.725494   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:01.725501   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:01.725562   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:01.757644   60176 cri.go:89] found id: ""
	I0725 18:53:01.757677   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.757688   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:01.757694   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:01.757765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:01.793640   60176 cri.go:89] found id: ""
	I0725 18:53:01.793660   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.793667   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:01.793672   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:01.793718   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:01.829336   60176 cri.go:89] found id: ""
	I0725 18:53:01.829368   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.829379   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:01.829386   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:01.829442   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:01.864597   60176 cri.go:89] found id: ""
	I0725 18:53:01.864625   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.864636   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:01.864643   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:01.864704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:01.895962   60176 cri.go:89] found id: ""
	I0725 18:53:01.895990   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.896001   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:01.896012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:01.896070   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:01.926426   60176 cri.go:89] found id: ""
	I0725 18:53:01.926451   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.926459   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:01.926463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:01.926517   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:01.957722   60176 cri.go:89] found id: ""
	I0725 18:53:01.957746   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.957755   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:01.957764   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:01.957779   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:02.012061   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:02.012096   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:02.025396   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:02.025423   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:02.088683   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:02.088706   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:02.088718   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:02.170941   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:02.170974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.713619   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:04.734911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:04.734970   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:04.793399   60176 cri.go:89] found id: ""
	I0725 18:53:04.793427   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.793438   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:04.793445   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:04.793493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:04.823679   60176 cri.go:89] found id: ""
	I0725 18:53:04.823711   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.823723   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:04.823729   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:04.823793   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:04.854922   60176 cri.go:89] found id: ""
	I0725 18:53:04.854957   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.854964   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:04.854970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:04.855023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:04.886913   60176 cri.go:89] found id: ""
	I0725 18:53:04.886937   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.886945   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:04.886953   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:04.887008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:04.919868   60176 cri.go:89] found id: ""
	I0725 18:53:04.919896   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.919907   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:04.919914   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:04.919979   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:04.953542   60176 cri.go:89] found id: ""
	I0725 18:53:04.953571   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.953581   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:04.953588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:04.953649   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:04.986901   60176 cri.go:89] found id: ""
	I0725 18:53:04.986925   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.986932   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:04.986937   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:04.986986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:05.020084   60176 cri.go:89] found id: ""
	I0725 18:53:05.020124   60176 logs.go:276] 0 containers: []
	W0725 18:53:05.020133   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:05.020141   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:05.020153   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:05.075512   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:05.075544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:05.089227   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:05.089256   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:05.155689   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:05.155707   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:05.155719   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:05.230252   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:05.230286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.286327   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.286366   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.287693   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.099267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.100754   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:10.599173   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.148931   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:09.647549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.770919   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:07.784196   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:07.784354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:07.817549   60176 cri.go:89] found id: ""
	I0725 18:53:07.817581   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.817593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:07.817601   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:07.817674   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:07.852853   60176 cri.go:89] found id: ""
	I0725 18:53:07.852876   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.852883   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:07.852889   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:07.852941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:07.890344   60176 cri.go:89] found id: ""
	I0725 18:53:07.890370   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.890377   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:07.890383   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:07.890443   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:07.921718   60176 cri.go:89] found id: ""
	I0725 18:53:07.921749   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.921760   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:07.921768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:07.921824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:07.955721   60176 cri.go:89] found id: ""
	I0725 18:53:07.955753   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.955763   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:07.955769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:07.955820   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:07.987760   60176 cri.go:89] found id: ""
	I0725 18:53:07.987789   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.987799   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:07.987806   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:07.987878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:08.020881   60176 cri.go:89] found id: ""
	I0725 18:53:08.020912   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.020922   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:08.020929   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:08.020994   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:08.053983   60176 cri.go:89] found id: ""
	I0725 18:53:08.054013   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.054025   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:08.054037   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:08.054053   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:08.134954   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:08.134996   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:08.177056   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:08.177085   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:08.229080   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:08.229121   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:08.242211   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:08.242242   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:08.305979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:10.806662   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:10.819111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:10.819172   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:10.854609   60176 cri.go:89] found id: ""
	I0725 18:53:10.854639   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.854652   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:10.854660   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:10.854743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:10.893436   60176 cri.go:89] found id: ""
	I0725 18:53:10.893466   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.893478   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:10.893486   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:10.893555   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:10.927410   60176 cri.go:89] found id: ""
	I0725 18:53:10.927435   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.927444   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:10.927449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:10.927520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:10.958061   60176 cri.go:89] found id: ""
	I0725 18:53:10.958082   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.958090   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:10.958095   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:10.958149   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:10.988781   60176 cri.go:89] found id: ""
	I0725 18:53:10.988812   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.988824   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:10.988831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:10.988892   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:11.021096   60176 cri.go:89] found id: ""
	I0725 18:53:11.021126   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.021137   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:11.021145   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:11.021204   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:11.053320   60176 cri.go:89] found id: ""
	I0725 18:53:11.053355   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.053368   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:11.053377   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:11.053445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:11.085018   60176 cri.go:89] found id: ""
	I0725 18:53:11.085046   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.085055   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:11.085063   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:11.085074   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:11.136102   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:11.136139   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:11.150126   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:11.150154   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:11.219206   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:11.219226   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:11.219243   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:11.301501   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:11.301534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:10.787076   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.287049   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.100296   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:15.598090   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:11.648889   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:14.148494   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.148801   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.840771   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:13.853763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:13.853848   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:13.889060   60176 cri.go:89] found id: ""
	I0725 18:53:13.889089   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.889098   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:13.889105   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:13.889163   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:13.920861   60176 cri.go:89] found id: ""
	I0725 18:53:13.920889   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.920900   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:13.920910   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:13.920974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:13.952009   60176 cri.go:89] found id: ""
	I0725 18:53:13.952037   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.952048   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:13.952054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:13.952117   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:13.985991   60176 cri.go:89] found id: ""
	I0725 18:53:13.986020   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.986030   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:13.986036   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:13.986098   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:14.024968   60176 cri.go:89] found id: ""
	I0725 18:53:14.024995   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.025003   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:14.025008   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:14.025079   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:14.058861   60176 cri.go:89] found id: ""
	I0725 18:53:14.058886   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.058897   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:14.058912   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:14.058976   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:14.092587   60176 cri.go:89] found id: ""
	I0725 18:53:14.092613   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.092628   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:14.092634   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:14.092697   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:14.127085   60176 cri.go:89] found id: ""
	I0725 18:53:14.127115   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.127124   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:14.127134   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:14.127148   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:14.179505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:14.179537   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:14.192813   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:14.192840   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:14.256564   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:14.256590   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:14.256604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:14.338570   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:14.338604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:15.287102   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.787128   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.599288   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:19.600086   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:18.648466   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:21.147558   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.877636   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:16.891131   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:16.891208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:16.924210   60176 cri.go:89] found id: ""
	I0725 18:53:16.924243   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.924253   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:16.924261   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:16.924343   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:16.957212   60176 cri.go:89] found id: ""
	I0725 18:53:16.957240   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.957247   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:16.957254   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:16.957341   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:16.989205   60176 cri.go:89] found id: ""
	I0725 18:53:16.989236   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.989244   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:16.989249   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:16.989298   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:17.025775   60176 cri.go:89] found id: ""
	I0725 18:53:17.025801   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.025812   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:17.025819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:17.025887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:17.059185   60176 cri.go:89] found id: ""
	I0725 18:53:17.059213   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.059223   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:17.059229   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:17.059275   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:17.090838   60176 cri.go:89] found id: ""
	I0725 18:53:17.090863   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.090871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:17.090876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:17.090932   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:17.126012   60176 cri.go:89] found id: ""
	I0725 18:53:17.126036   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.126044   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:17.126048   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:17.126106   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:17.165369   60176 cri.go:89] found id: ""
	I0725 18:53:17.165394   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.165405   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:17.165415   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:17.165436   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:17.178730   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:17.178771   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:17.251639   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:17.251666   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:17.251681   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:17.334840   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:17.334887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:17.380868   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:17.380895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.931610   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:19.943864   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:19.943964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:19.975865   60176 cri.go:89] found id: ""
	I0725 18:53:19.975893   60176 logs.go:276] 0 containers: []
	W0725 18:53:19.975904   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:19.975910   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:19.975975   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:20.010230   60176 cri.go:89] found id: ""
	I0725 18:53:20.010258   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.010268   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:20.010274   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:20.010321   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:20.042591   60176 cri.go:89] found id: ""
	I0725 18:53:20.042618   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.042626   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:20.042632   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:20.042680   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:20.073184   60176 cri.go:89] found id: ""
	I0725 18:53:20.073212   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.073224   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:20.073231   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:20.073286   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:20.106770   60176 cri.go:89] found id: ""
	I0725 18:53:20.106798   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.106810   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:20.106818   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:20.106888   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:20.141368   60176 cri.go:89] found id: ""
	I0725 18:53:20.141419   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.141429   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:20.141437   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:20.141496   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:20.174814   60176 cri.go:89] found id: ""
	I0725 18:53:20.174841   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.174852   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:20.174859   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:20.174918   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:20.208463   60176 cri.go:89] found id: ""
	I0725 18:53:20.208489   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.208497   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:20.208505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:20.208519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:20.220843   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:20.220867   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:20.287846   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:20.287871   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:20.287887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:20.362354   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:20.362391   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:20.399616   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:20.399650   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.790264   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.288082   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.100856   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:24.600029   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:23.148297   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:25.647615   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.950804   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:22.963553   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:22.963625   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:22.996193   60176 cri.go:89] found id: ""
	I0725 18:53:22.996215   60176 logs.go:276] 0 containers: []
	W0725 18:53:22.996222   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:22.996228   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:22.996273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:23.029417   60176 cri.go:89] found id: ""
	I0725 18:53:23.029446   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.029455   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:23.029460   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:23.029508   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:23.062381   60176 cri.go:89] found id: ""
	I0725 18:53:23.062406   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.062414   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:23.062419   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:23.062471   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:23.093948   60176 cri.go:89] found id: ""
	I0725 18:53:23.093975   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.093987   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:23.093995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:23.094066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:23.128049   60176 cri.go:89] found id: ""
	I0725 18:53:23.128076   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.128085   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:23.128091   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:23.128139   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:23.164593   60176 cri.go:89] found id: ""
	I0725 18:53:23.164617   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.164625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:23.164631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:23.164683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:23.197996   60176 cri.go:89] found id: ""
	I0725 18:53:23.198024   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.198032   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:23.198037   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:23.198087   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:23.233498   60176 cri.go:89] found id: ""
	I0725 18:53:23.233533   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.233545   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:23.233565   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:23.233580   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:23.287473   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:23.287506   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:23.300308   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:23.300358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:23.368879   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:23.368906   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:23.368919   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:23.445420   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:23.445453   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:25.985626   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:25.997898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:25.997971   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:26.030558   60176 cri.go:89] found id: ""
	I0725 18:53:26.030584   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.030593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:26.030599   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:26.030660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:26.067209   60176 cri.go:89] found id: ""
	I0725 18:53:26.067245   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.067256   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:26.067263   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:26.067348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:26.100872   60176 cri.go:89] found id: ""
	I0725 18:53:26.100891   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.100897   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:26.100902   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:26.100949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:26.135077   60176 cri.go:89] found id: ""
	I0725 18:53:26.135102   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.135110   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:26.135115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:26.135175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:26.171332   60176 cri.go:89] found id: ""
	I0725 18:53:26.171431   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.171445   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:26.171452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:26.171507   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:26.205883   60176 cri.go:89] found id: ""
	I0725 18:53:26.205912   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.205923   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:26.205930   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:26.205989   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:26.240407   60176 cri.go:89] found id: ""
	I0725 18:53:26.240436   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.240446   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:26.240452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:26.240513   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:26.273041   60176 cri.go:89] found id: ""
	I0725 18:53:26.273068   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.273078   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:26.273089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:26.273103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:26.327783   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:26.327815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:26.342925   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:26.342952   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:53:24.786526   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:26.786771   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:28.786890   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.100267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.600204   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.648059   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.648771   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:53:26.412563   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:26.412589   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:26.412605   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:26.493182   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:26.493222   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.030816   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:29.044047   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:29.044104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:29.077288   60176 cri.go:89] found id: ""
	I0725 18:53:29.077335   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.077354   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:29.077362   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:29.077429   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:29.113350   60176 cri.go:89] found id: ""
	I0725 18:53:29.113383   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.113395   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:29.113402   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:29.113472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:29.147123   60176 cri.go:89] found id: ""
	I0725 18:53:29.147151   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.147161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:29.147168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:29.147224   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:29.182248   60176 cri.go:89] found id: ""
	I0725 18:53:29.182279   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.182296   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:29.182304   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:29.182367   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:29.215750   60176 cri.go:89] found id: ""
	I0725 18:53:29.215777   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.215788   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:29.215795   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:29.215857   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:29.249001   60176 cri.go:89] found id: ""
	I0725 18:53:29.249027   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.249037   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:29.249044   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:29.249104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:29.281774   60176 cri.go:89] found id: ""
	I0725 18:53:29.281802   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.281812   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:29.281819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:29.281879   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:29.318703   60176 cri.go:89] found id: ""
	I0725 18:53:29.318728   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.318736   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:29.318744   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:29.318760   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:29.398145   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:29.398170   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:29.398184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:29.474090   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:29.474126   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.510143   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:29.510216   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:29.562952   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:29.562988   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:30.787145   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.788031   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.099672   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.148832   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.647209   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.076743   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:32.090035   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:32.090108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:32.123139   60176 cri.go:89] found id: ""
	I0725 18:53:32.123173   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.123184   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:32.123191   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:32.123255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:32.156337   60176 cri.go:89] found id: ""
	I0725 18:53:32.156363   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.156372   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:32.156378   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:32.156437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:32.191566   60176 cri.go:89] found id: ""
	I0725 18:53:32.191597   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.191609   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:32.191617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:32.191684   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:32.225480   60176 cri.go:89] found id: ""
	I0725 18:53:32.225519   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.225528   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:32.225535   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:32.225593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:32.257129   60176 cri.go:89] found id: ""
	I0725 18:53:32.257160   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.257169   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:32.257175   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:32.257221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:32.298142   60176 cri.go:89] found id: ""
	I0725 18:53:32.298171   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.298180   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:32.298190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:32.298240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:32.331052   60176 cri.go:89] found id: ""
	I0725 18:53:32.331081   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.331092   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:32.331098   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:32.331143   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:32.364841   60176 cri.go:89] found id: ""
	I0725 18:53:32.364871   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.364882   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:32.364892   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:32.364907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:32.417931   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:32.417970   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:32.432131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:32.432159   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:32.499759   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:32.499784   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:32.499806   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:32.579140   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:32.579191   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:35.120647   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:35.133992   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:35.134084   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:35.172030   60176 cri.go:89] found id: ""
	I0725 18:53:35.172052   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.172061   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:35.172067   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:35.172123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:35.207893   60176 cri.go:89] found id: ""
	I0725 18:53:35.207920   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.207930   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:35.207937   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:35.207991   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:35.241626   60176 cri.go:89] found id: ""
	I0725 18:53:35.241651   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.241661   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:35.241668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:35.241732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:35.274017   60176 cri.go:89] found id: ""
	I0725 18:53:35.274047   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.274058   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:35.274064   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:35.274129   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:35.308778   60176 cri.go:89] found id: ""
	I0725 18:53:35.308809   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.308820   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:35.308827   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:35.308890   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:35.341366   60176 cri.go:89] found id: ""
	I0725 18:53:35.341392   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.341400   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:35.341406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:35.341461   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:35.373955   60176 cri.go:89] found id: ""
	I0725 18:53:35.373983   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.373994   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:35.374001   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:35.374058   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:35.404705   60176 cri.go:89] found id: ""
	I0725 18:53:35.404733   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.404743   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:35.404755   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:35.404794   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:35.455009   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:35.455043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:35.469113   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:35.469141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:35.533466   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:35.533497   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:35.533514   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:35.608513   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:35.608546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:34.789202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:37.287021   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.100385   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:40.599540   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.647379   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.648503   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.147602   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.147415   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:38.159974   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:38.160032   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:38.191108   60176 cri.go:89] found id: ""
	I0725 18:53:38.191138   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.191150   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:38.191157   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:38.191207   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:38.223494   60176 cri.go:89] found id: ""
	I0725 18:53:38.223519   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.223527   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:38.223533   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:38.223583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:38.254433   60176 cri.go:89] found id: ""
	I0725 18:53:38.254462   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.254473   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:38.254480   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:38.254546   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:38.286229   60176 cri.go:89] found id: ""
	I0725 18:53:38.286258   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.286268   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:38.286276   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:38.286339   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:38.323332   60176 cri.go:89] found id: ""
	I0725 18:53:38.323362   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.323371   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:38.323378   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:38.323441   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:38.356260   60176 cri.go:89] found id: ""
	I0725 18:53:38.356290   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.356301   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:38.356309   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:38.356383   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:38.388543   60176 cri.go:89] found id: ""
	I0725 18:53:38.388571   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.388582   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:38.388588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:38.388660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:38.424003   60176 cri.go:89] found id: ""
	I0725 18:53:38.424030   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.424040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:38.424051   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:38.424065   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:38.474963   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:38.474995   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:38.488392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:38.488425   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:38.561922   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:38.561946   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:38.562116   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:38.646569   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:38.646604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:41.190319   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:41.202314   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:41.202382   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:41.238344   60176 cri.go:89] found id: ""
	I0725 18:53:41.238370   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.238378   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:41.238383   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:41.238438   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:41.272219   60176 cri.go:89] found id: ""
	I0725 18:53:41.272252   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.272263   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:41.272271   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:41.272349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:41.307125   60176 cri.go:89] found id: ""
	I0725 18:53:41.307151   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.307161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:41.307168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:41.307230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:41.339277   60176 cri.go:89] found id: ""
	I0725 18:53:41.339307   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.339320   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:41.339328   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:41.339394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:41.373989   60176 cri.go:89] found id: ""
	I0725 18:53:41.374103   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.374126   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:41.374136   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:41.374205   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:39.287244   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.287891   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.787538   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:42.600625   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.099276   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.647388   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.648749   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.404939   60176 cri.go:89] found id: ""
	I0725 18:53:41.404968   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.404979   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:41.404986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:41.405050   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:41.436889   60176 cri.go:89] found id: ""
	I0725 18:53:41.436919   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.436931   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:41.436940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:41.437009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:41.468457   60176 cri.go:89] found id: ""
	I0725 18:53:41.468486   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.468496   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:41.468506   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:41.468520   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:41.519499   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:41.519529   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:41.533653   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:41.533688   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:41.602134   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:41.602156   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:41.602171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:41.676181   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:41.676214   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.213932   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:44.226286   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:44.226352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:44.258782   60176 cri.go:89] found id: ""
	I0725 18:53:44.258817   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.258829   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:44.258835   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:44.258887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:44.308398   60176 cri.go:89] found id: ""
	I0725 18:53:44.308424   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.308432   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:44.308437   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:44.308499   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:44.339388   60176 cri.go:89] found id: ""
	I0725 18:53:44.339414   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.339424   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:44.339430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:44.339493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:44.369635   60176 cri.go:89] found id: ""
	I0725 18:53:44.369669   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.369679   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:44.369685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:44.369751   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:44.403834   60176 cri.go:89] found id: ""
	I0725 18:53:44.403859   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.403869   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:44.403876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:44.403939   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:44.439172   60176 cri.go:89] found id: ""
	I0725 18:53:44.439204   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.439215   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:44.439222   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:44.439287   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:44.474638   60176 cri.go:89] found id: ""
	I0725 18:53:44.474664   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.474674   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:44.474681   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:44.474743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:44.506205   60176 cri.go:89] found id: ""
	I0725 18:53:44.506226   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.506233   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:44.506241   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:44.506253   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:44.587955   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:44.587994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.626251   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:44.626276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:44.679008   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:44.679040   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:44.691749   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:44.691776   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:44.763419   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:46.286260   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.287172   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.099923   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:49.600555   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.148223   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:50.648549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.263738   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:47.275907   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:47.275974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:47.313612   60176 cri.go:89] found id: ""
	I0725 18:53:47.313642   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.313651   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:47.313662   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:47.313727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:47.345186   60176 cri.go:89] found id: ""
	I0725 18:53:47.345215   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.345226   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:47.345233   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:47.345304   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:47.378074   60176 cri.go:89] found id: ""
	I0725 18:53:47.378103   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.378114   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:47.378128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:47.378188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:47.407147   60176 cri.go:89] found id: ""
	I0725 18:53:47.407176   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.407186   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:47.407193   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:47.407255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:47.437015   60176 cri.go:89] found id: ""
	I0725 18:53:47.437049   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.437061   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:47.437068   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:47.437153   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:47.469201   60176 cri.go:89] found id: ""
	I0725 18:53:47.469231   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.469241   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:47.469248   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:47.469331   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:47.501160   60176 cri.go:89] found id: ""
	I0725 18:53:47.501189   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.501199   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:47.501206   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:47.501264   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:47.535102   60176 cri.go:89] found id: ""
	I0725 18:53:47.535140   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.535149   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:47.535159   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:47.535184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:47.547568   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:47.547593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:47.616025   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:47.616048   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:47.616062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:47.690450   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:47.690482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:47.725553   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:47.725589   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.281640   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:50.295201   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:50.295272   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:50.331689   60176 cri.go:89] found id: ""
	I0725 18:53:50.331713   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.331721   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:50.331726   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:50.331770   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:50.362392   60176 cri.go:89] found id: ""
	I0725 18:53:50.362422   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.362434   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:50.362441   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:50.362505   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:50.393410   60176 cri.go:89] found id: ""
	I0725 18:53:50.393433   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.393441   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:50.393449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:50.393493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:50.425041   60176 cri.go:89] found id: ""
	I0725 18:53:50.425068   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.425079   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:50.425085   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:50.425144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:50.461533   60176 cri.go:89] found id: ""
	I0725 18:53:50.461556   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.461563   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:50.461568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:50.461614   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:50.494395   60176 cri.go:89] found id: ""
	I0725 18:53:50.494417   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.494425   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:50.494431   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:50.494485   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:50.528639   60176 cri.go:89] found id: ""
	I0725 18:53:50.528663   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.528672   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:50.528678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:50.528724   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:50.562007   60176 cri.go:89] found id: ""
	I0725 18:53:50.562032   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.562040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:50.562049   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:50.562062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.612107   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:50.612141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:50.624516   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:50.624540   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:50.724772   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:50.724799   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:50.724818   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:50.813891   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:50.813924   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:50.288626   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.786395   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.100268   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:54.598939   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.147764   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:55.147940   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.352629   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:53.366863   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:53.366941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:53.401238   60176 cri.go:89] found id: ""
	I0725 18:53:53.401266   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.401277   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:53.401284   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:53.401351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:53.434133   60176 cri.go:89] found id: ""
	I0725 18:53:53.434166   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.434178   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:53.434186   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:53.434248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:53.470135   60176 cri.go:89] found id: ""
	I0725 18:53:53.470157   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.470165   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:53.470170   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:53.470217   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:53.512591   60176 cri.go:89] found id: ""
	I0725 18:53:53.512613   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.512621   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:53.512626   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:53.512683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:53.544476   60176 cri.go:89] found id: ""
	I0725 18:53:53.544506   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.544517   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:53.544524   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:53.544591   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:53.577697   60176 cri.go:89] found id: ""
	I0725 18:53:53.577727   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.577746   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:53.577753   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:53.577816   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:53.610729   60176 cri.go:89] found id: ""
	I0725 18:53:53.610754   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.610761   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:53.610769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:53.610817   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:53.645127   60176 cri.go:89] found id: ""
	I0725 18:53:53.645154   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.645164   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:53.645174   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:53.645188   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:53.694575   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:53.694608   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:53.707931   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:53.707958   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:53.778423   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:53.778446   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:53.778460   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:53.860424   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:53.860458   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:55.286806   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.288524   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:56.600953   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:59.099301   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.647861   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:00.148873   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:56.400993   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:56.418757   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:56.418834   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:56.466300   60176 cri.go:89] found id: ""
	I0725 18:53:56.466330   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.466340   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:56.466348   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:56.466409   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:56.523080   60176 cri.go:89] found id: ""
	I0725 18:53:56.523107   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.523117   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:56.523124   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:56.523184   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:56.554854   60176 cri.go:89] found id: ""
	I0725 18:53:56.554881   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.554891   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:56.554898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:56.554953   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:56.588851   60176 cri.go:89] found id: ""
	I0725 18:53:56.588876   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.588885   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:56.588892   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:56.588958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:56.623818   60176 cri.go:89] found id: ""
	I0725 18:53:56.623840   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.623849   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:56.623854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:56.623902   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:56.658958   60176 cri.go:89] found id: ""
	I0725 18:53:56.658982   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.658990   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:56.658996   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:56.659044   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:56.694689   60176 cri.go:89] found id: ""
	I0725 18:53:56.694715   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.694724   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:56.694729   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:56.694780   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:56.728038   60176 cri.go:89] found id: ""
	I0725 18:53:56.728067   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.728077   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:56.728088   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:56.728103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:56.805628   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:56.805657   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:56.805672   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:56.886168   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:56.886210   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:56.923004   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:56.923043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:56.975693   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:56.975729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.491244   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:59.503301   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:59.503363   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:59.540674   60176 cri.go:89] found id: ""
	I0725 18:53:59.540699   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.540707   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:59.540712   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:59.540763   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:59.575145   60176 cri.go:89] found id: ""
	I0725 18:53:59.575182   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.575192   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:59.575199   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:59.575260   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:59.606952   60176 cri.go:89] found id: ""
	I0725 18:53:59.606978   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.606989   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:59.606995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:59.607056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:59.645110   60176 cri.go:89] found id: ""
	I0725 18:53:59.645136   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.645147   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:59.645155   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:59.645218   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:59.676479   60176 cri.go:89] found id: ""
	I0725 18:53:59.676499   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.676507   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:59.676512   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:59.676581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:59.707454   60176 cri.go:89] found id: ""
	I0725 18:53:59.707482   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.707493   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:59.707500   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:59.707575   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:59.740387   60176 cri.go:89] found id: ""
	I0725 18:53:59.740414   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.740421   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:59.740427   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:59.740474   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:59.774171   60176 cri.go:89] found id: ""
	I0725 18:53:59.774199   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.774207   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:59.774216   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:59.774231   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:59.825138   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:59.825171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.839715   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:59.839742   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:59.905645   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:59.905681   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:59.905699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:59.980909   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:59.980943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:59.787202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.286987   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:01.099490   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:03.100056   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.602329   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.647803   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:04.648473   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.524178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:02.538055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:02.538113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:02.576234   60176 cri.go:89] found id: ""
	I0725 18:54:02.576259   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.576268   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:02.576274   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:02.576340   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:02.607765   60176 cri.go:89] found id: ""
	I0725 18:54:02.607792   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.607803   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:02.607810   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:02.607865   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:02.640566   60176 cri.go:89] found id: ""
	I0725 18:54:02.640592   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.640601   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:02.640606   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:02.640655   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:02.673476   60176 cri.go:89] found id: ""
	I0725 18:54:02.673504   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.673512   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:02.673517   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:02.673565   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:02.706270   60176 cri.go:89] found id: ""
	I0725 18:54:02.706299   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.706309   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:02.706316   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:02.706376   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:02.737108   60176 cri.go:89] found id: ""
	I0725 18:54:02.737138   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.737146   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:02.737152   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:02.737200   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:02.775681   60176 cri.go:89] found id: ""
	I0725 18:54:02.775710   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.775719   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:02.775724   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:02.775773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:02.808116   60176 cri.go:89] found id: ""
	I0725 18:54:02.808151   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.808159   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:02.808169   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:02.808182   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:02.872505   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:02.872534   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:02.872557   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:02.948158   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:02.948193   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:02.982990   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:02.983020   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:03.031910   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:03.031943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:05.545994   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:05.559105   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.559174   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.594106   60176 cri.go:89] found id: ""
	I0725 18:54:05.594134   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.594144   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:05.594151   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.594232   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.630148   60176 cri.go:89] found id: ""
	I0725 18:54:05.630172   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.630179   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:05.630185   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.630242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.662968   60176 cri.go:89] found id: ""
	I0725 18:54:05.662993   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.663003   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:05.663010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.663059   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.696645   60176 cri.go:89] found id: ""
	I0725 18:54:05.696668   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.696676   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:05.696682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.696738   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:05.730027   60176 cri.go:89] found id: ""
	I0725 18:54:05.730050   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.730058   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:05.730063   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:05.730113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:05.760918   60176 cri.go:89] found id: ""
	I0725 18:54:05.760946   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.760956   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:05.760968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:05.761027   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:05.801025   60176 cri.go:89] found id: ""
	I0725 18:54:05.801057   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.801068   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:05.801075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:05.801142   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:05.834567   60176 cri.go:89] found id: ""
	I0725 18:54:05.834594   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.834605   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:05.834615   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:05.834630   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:05.903812   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:05.903840   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:05.903855   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:05.981642   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:05.981671   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.024246   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.024316   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.081777   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:06.081802   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:04.786654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.786668   59645 pod_ready.go:81] duration metric: took 4m0.006258788s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:05.786698   59645 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:05.786708   59645 pod_ready.go:38] duration metric: took 4m6.551775292s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:54:05.786726   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:05.786754   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.786811   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.838362   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:05.838384   59645 cri.go:89] found id: ""
	I0725 18:54:05.838391   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:05.838441   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.843131   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.843190   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.882099   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:05.882125   59645 cri.go:89] found id: ""
	I0725 18:54:05.882134   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:05.882191   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.886383   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.886450   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.931971   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:05.932001   59645 cri.go:89] found id: ""
	I0725 18:54:05.932011   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:05.932069   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.936830   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.936891   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.976146   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:05.976171   59645 cri.go:89] found id: ""
	I0725 18:54:05.976179   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:05.976244   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.980878   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.980959   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:06.028640   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.028663   59645 cri.go:89] found id: ""
	I0725 18:54:06.028672   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:06.028720   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.033353   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:06.033411   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:06.072245   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.072269   59645 cri.go:89] found id: ""
	I0725 18:54:06.072279   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:06.072352   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.076614   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:06.076672   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:06.116418   59645 cri.go:89] found id: ""
	I0725 18:54:06.116443   59645 logs.go:276] 0 containers: []
	W0725 18:54:06.116453   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:06.116460   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:06.116520   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:06.154703   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:06.154725   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:06.154730   59645 cri.go:89] found id: ""
	I0725 18:54:06.154737   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:06.154795   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.158699   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.162190   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:06.162213   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.199003   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:06.199033   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.248171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:06.248208   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:06.774102   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:06.774139   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.815959   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.815984   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.872973   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:06.873013   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:06.915825   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:06.915858   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:06.958394   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:06.958423   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:06.993405   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:06.993437   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:07.026716   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:07.026745   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:07.040444   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:07.040474   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:07.156511   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:07.156541   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:07.191065   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:07.191091   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:08.099408   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:10.100363   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:07.148587   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:09.648368   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:08.598790   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:08.611234   60176 kubeadm.go:597] duration metric: took 4m4.357436643s to restartPrimaryControlPlane
	W0725 18:54:08.611305   60176 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0725 18:54:08.611343   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:54:13.076782   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.465409333s)
	I0725 18:54:13.076872   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:13.091089   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:54:13.102042   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:54:13.111117   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:54:13.111134   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:54:13.111171   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:54:13.119629   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:54:13.119676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:54:13.128676   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:54:13.136705   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:54:13.136761   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:54:13.145959   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.154628   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:54:13.154676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.163164   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:54:13.171473   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:54:13.171552   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:54:13.179663   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:54:13.244923   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:54:13.245063   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:54:13.387687   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:54:13.387814   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:54:13.387941   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:54:13.566258   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:54:09.724251   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:09.740055   59645 api_server.go:72] duration metric: took 4m18.224261341s to wait for apiserver process to appear ...
	I0725 18:54:09.740086   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:09.740125   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:09.740189   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:09.780027   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:09.780052   59645 cri.go:89] found id: ""
	I0725 18:54:09.780061   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:09.780121   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.784110   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:09.784170   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:09.821158   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:09.821177   59645 cri.go:89] found id: ""
	I0725 18:54:09.821185   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:09.821245   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.825235   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:09.825294   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:09.863880   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:09.863903   59645 cri.go:89] found id: ""
	I0725 18:54:09.863910   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:09.863956   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.868206   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:09.868260   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:09.902168   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:09.902191   59645 cri.go:89] found id: ""
	I0725 18:54:09.902200   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:09.902260   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.906583   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:09.906637   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:09.948980   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:09.948997   59645 cri.go:89] found id: ""
	I0725 18:54:09.949004   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:09.949061   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.953072   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:09.953135   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:09.987862   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:09.987891   59645 cri.go:89] found id: ""
	I0725 18:54:09.987901   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:09.987970   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.991893   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:09.991956   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:10.029171   59645 cri.go:89] found id: ""
	I0725 18:54:10.029201   59645 logs.go:276] 0 containers: []
	W0725 18:54:10.029212   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:10.029229   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:10.029298   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:10.069098   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.069123   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.069129   59645 cri.go:89] found id: ""
	I0725 18:54:10.069138   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:10.069185   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.073777   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.077625   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:10.077650   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:10.089863   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:10.089889   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:10.139865   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:10.139906   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:10.178236   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:10.178263   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:10.216425   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:10.216455   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:10.249818   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:10.249845   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.286603   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:10.286629   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:10.325189   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:10.325215   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:10.378752   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:10.378793   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:10.485922   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:10.485964   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:10.535583   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:10.535627   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:10.586930   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:10.586963   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.626295   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:10.626323   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.552874   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:54:13.558265   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:54:13.559439   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:13.559459   59645 api_server.go:131] duration metric: took 3.819366874s to wait for apiserver health ...
	I0725 18:54:13.559467   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:13.559491   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:13.559539   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:13.597965   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:13.597988   59645 cri.go:89] found id: ""
	I0725 18:54:13.597996   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:13.598050   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.602225   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:13.602291   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:13.652885   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:13.652914   59645 cri.go:89] found id: ""
	I0725 18:54:13.652924   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:13.652982   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.656970   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:13.657031   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:13.690769   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:13.690792   59645 cri.go:89] found id: ""
	I0725 18:54:13.690802   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:13.690861   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.694630   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:13.694692   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:13.732306   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:13.732346   59645 cri.go:89] found id: ""
	I0725 18:54:13.732356   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:13.732413   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.736242   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:13.736311   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:13.771516   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:13.771543   59645 cri.go:89] found id: ""
	I0725 18:54:13.771552   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:13.771610   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.775592   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:13.775654   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:13.812821   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:13.812847   59645 cri.go:89] found id: ""
	I0725 18:54:13.812857   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:13.812911   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.817039   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:13.817097   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:13.856529   59645 cri.go:89] found id: ""
	I0725 18:54:13.856560   59645 logs.go:276] 0 containers: []
	W0725 18:54:13.856577   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:13.856584   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:13.856647   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:13.889734   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:13.889760   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:13.889766   59645 cri.go:89] found id: ""
	I0725 18:54:13.889774   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:13.889831   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.893730   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.897171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:13.897188   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.568262   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:54:13.568407   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:54:13.568493   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:54:13.568599   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:54:13.568677   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:54:13.568771   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:54:13.568844   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:54:13.569095   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:54:13.570081   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:54:13.570719   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:54:13.571213   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:54:13.571395   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:54:13.571482   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:54:13.900234   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:54:14.171283   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:54:14.317774   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:54:14.522412   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:54:14.537598   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:54:14.539553   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:54:14.539629   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:54:14.683755   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:54:12.600280   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.601203   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:11.648941   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.148132   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.685635   60176 out.go:204]   - Booting up control plane ...
	I0725 18:54:14.685764   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:54:14.697124   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:54:14.698087   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:54:14.698830   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:54:14.701051   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:54:14.314664   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:14.314702   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:14.359956   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:14.359991   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:14.429456   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:14.429491   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:14.551238   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:14.551279   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:14.598045   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:14.598082   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:14.633668   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:14.633700   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:14.668871   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:14.668897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:14.732575   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:14.732644   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:14.748852   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:14.748897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:14.794021   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:14.794058   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:14.836447   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:14.836481   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:14.870813   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:14.870852   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:17.414647   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:17.414678   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.414683   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.414687   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.414691   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.414694   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.414699   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.414704   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.414710   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.414718   59645 system_pods.go:74] duration metric: took 3.85524656s to wait for pod list to return data ...
	I0725 18:54:17.414726   59645 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:17.417047   59645 default_sa.go:45] found service account: "default"
	I0725 18:54:17.417067   59645 default_sa.go:55] duration metric: took 2.333088ms for default service account to be created ...
	I0725 18:54:17.417074   59645 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:17.422890   59645 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:17.422915   59645 system_pods.go:89] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.422920   59645 system_pods.go:89] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.422925   59645 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.422929   59645 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.422933   59645 system_pods.go:89] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.422936   59645 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.422942   59645 system_pods.go:89] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.422947   59645 system_pods.go:89] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.422953   59645 system_pods.go:126] duration metric: took 5.874194ms to wait for k8s-apps to be running ...
	I0725 18:54:17.422958   59645 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:17.422998   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:17.438463   59645 system_svc.go:56] duration metric: took 15.497014ms WaitForService to wait for kubelet
	I0725 18:54:17.438490   59645 kubeadm.go:582] duration metric: took 4m25.922705533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:17.438511   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:17.441632   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:17.441653   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:17.441671   59645 node_conditions.go:105] duration metric: took 3.155244ms to run NodePressure ...
	I0725 18:54:17.441682   59645 start.go:241] waiting for startup goroutines ...
	I0725 18:54:17.441688   59645 start.go:246] waiting for cluster config update ...
	I0725 18:54:17.441698   59645 start.go:255] writing updated cluster config ...
	I0725 18:54:17.441957   59645 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:17.491791   59645 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:17.493992   59645 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-600433" cluster and "default" namespace by default
	I0725 18:54:16.601481   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:19.100120   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:16.646970   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:18.647757   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:20.650382   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:21.599857   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:24.099007   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:23.147215   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:25.148069   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:26.599428   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:28.600159   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:30.601469   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:27.150076   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:29.647741   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:33.100850   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:35.600080   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:31.648293   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:34.147584   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:36.147883   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.099662   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.601691   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.148559   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.648470   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:43.099948   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:45.599146   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:41.647969   60732 pod_ready.go:81] duration metric: took 4m0.006188545s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:41.647993   60732 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:41.647999   60732 pod_ready.go:38] duration metric: took 4m4.549463734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:54:41.648014   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:41.648042   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:41.648093   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:41.701960   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:41.701990   60732 cri.go:89] found id: ""
	I0725 18:54:41.702000   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:41.702060   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.706683   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:41.706775   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:41.741997   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:41.742019   60732 cri.go:89] found id: ""
	I0725 18:54:41.742027   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:41.742070   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.745965   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:41.746019   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:41.787104   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:41.787127   60732 cri.go:89] found id: ""
	I0725 18:54:41.787137   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:41.787189   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.791375   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:41.791441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:41.836394   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:41.836417   60732 cri.go:89] found id: ""
	I0725 18:54:41.836425   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:41.836472   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.840775   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:41.840830   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:41.877307   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:41.877328   60732 cri.go:89] found id: ""
	I0725 18:54:41.877338   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:41.877384   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.881221   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:41.881289   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:41.918540   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:41.918569   60732 cri.go:89] found id: ""
	I0725 18:54:41.918579   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:41.918639   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.922866   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:41.922975   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:41.957335   60732 cri.go:89] found id: ""
	I0725 18:54:41.957361   60732 logs.go:276] 0 containers: []
	W0725 18:54:41.957371   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:41.957377   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:41.957441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:41.998241   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:41.998269   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:41.998274   60732 cri.go:89] found id: ""
	I0725 18:54:41.998283   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:41.998333   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.002872   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.006541   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:42.006571   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:42.039456   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:42.039484   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:42.535367   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:42.535412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:42.592118   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:42.592165   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:42.606753   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:42.606784   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:42.656287   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:42.656337   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:42.696439   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:42.696470   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:42.752874   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:42.752913   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:42.786513   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:42.786540   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:42.914470   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:42.914506   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:42.951371   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:42.951399   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:42.989249   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:42.989278   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:43.030911   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:43.030945   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
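Note: each "Gathering logs for <component>" step above is two crictl calls, one to resolve container IDs by name and one to tail the last 400 lines of each match. The Go sketch below reproduces that pattern for a few of the component names seen in this run; it assumes crictl is on PATH and usable via sudo, and it is an illustration of the commands shown in the log, not minikube's own implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors: sudo crictl ps -a --quiet --name=<name>
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs mirrors: sudo crictl logs --tail 400 <id>
	func tailLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "storage-provisioner"} {
			ids, err := containerIDs(name)
			if err != nil || len(ids) == 0 {
				fmt.Printf("%s: no containers found (err=%v)\n", name, err)
				continue
			}
			for _, id := range ids {
				logs, err := tailLogs(id)
				if err != nil {
					fmt.Printf("%s [%s]: %v\n", name, id, err)
					continue
				}
				fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
			}
		}
	}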
	I0725 18:54:45.581560   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:45.599532   60732 api_server.go:72] duration metric: took 4m15.71630146s to wait for apiserver process to appear ...
	I0725 18:54:45.599559   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:45.599602   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:45.599669   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:45.643222   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:45.643245   60732 cri.go:89] found id: ""
	I0725 18:54:45.643251   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:45.643293   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.647594   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:45.647646   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:45.685817   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:45.685843   60732 cri.go:89] found id: ""
	I0725 18:54:45.685851   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:45.685908   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.689698   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:45.689746   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:45.723068   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:45.723086   60732 cri.go:89] found id: ""
	I0725 18:54:45.723093   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:45.723139   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.727312   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:45.727373   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:45.764668   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.764691   60732 cri.go:89] found id: ""
	I0725 18:54:45.764698   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:45.764746   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.768763   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:45.768821   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:45.804140   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.804162   60732 cri.go:89] found id: ""
	I0725 18:54:45.804171   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:45.804229   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.807907   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:45.807962   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:45.845435   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:45.845458   60732 cri.go:89] found id: ""
	I0725 18:54:45.845465   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:45.845516   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.849429   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:45.849488   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:45.882663   60732 cri.go:89] found id: ""
	I0725 18:54:45.882696   60732 logs.go:276] 0 containers: []
	W0725 18:54:45.882706   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:45.882713   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:45.882779   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:45.916947   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:45.916975   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:45.916988   60732 cri.go:89] found id: ""
	I0725 18:54:45.916995   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:45.917039   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.921470   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.925153   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:45.925175   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.959693   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:45.959722   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.998162   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:45.998188   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:47.599790   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:49.605818   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:46.424235   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:46.424271   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:46.465439   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:46.465468   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:46.516900   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:46.516931   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:46.629700   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:46.629777   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:46.673233   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:46.673264   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:46.706641   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:46.706680   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:46.741970   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:46.742002   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:46.755337   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:46.755364   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:46.805564   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:46.805594   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:46.856226   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:46.856257   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.398852   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:54:49.403222   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:54:49.404180   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:49.404199   60732 api_server.go:131] duration metric: took 3.804634202s to wait for apiserver health ...
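Note: the healthz probe that just succeeded amounts to an HTTPS GET against https://192.168.61.133:8443/healthz that expects a 200 response with body "ok". A stand-alone sketch of that check follows; the address is taken from the log, while skipping certificate verification and relying on anonymous access to /healthz (the upstream default) are assumptions made so the sketch can run outside the test harness.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.61.133:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok", as seen in the log above.
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
	}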
	I0725 18:54:49.404206   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:49.404227   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:49.404269   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:49.439543   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:49.439561   60732 cri.go:89] found id: ""
	I0725 18:54:49.439568   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:49.439625   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.444958   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:49.445028   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:49.482934   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:49.482959   60732 cri.go:89] found id: ""
	I0725 18:54:49.482969   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:49.483026   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.486982   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:49.487057   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:49.526379   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.526405   60732 cri.go:89] found id: ""
	I0725 18:54:49.526415   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:49.526481   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.531314   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:49.531401   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:49.565687   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.565716   60732 cri.go:89] found id: ""
	I0725 18:54:49.565724   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:49.565772   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.569706   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:49.569778   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:49.606900   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.606923   60732 cri.go:89] found id: ""
	I0725 18:54:49.606932   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:49.606986   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.611079   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:49.611155   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:49.645077   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.645099   60732 cri.go:89] found id: ""
	I0725 18:54:49.645107   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:49.645165   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.648932   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:49.648984   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:49.685181   60732 cri.go:89] found id: ""
	I0725 18:54:49.685209   60732 logs.go:276] 0 containers: []
	W0725 18:54:49.685220   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:49.685228   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:49.685290   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:49.718825   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.718852   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:49.718858   60732 cri.go:89] found id: ""
	I0725 18:54:49.718866   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:49.718927   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.723182   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.726590   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:49.726611   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.760011   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:49.760038   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.816552   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:49.816593   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.852003   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:49.852034   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.887907   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:49.887937   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.920728   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:49.920763   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:49.972145   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:49.972177   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:49.986365   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:49.986391   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:50.088100   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:50.088141   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:50.137382   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:50.137412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:50.181636   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:50.181668   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:50.217427   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:50.217452   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:50.575378   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:50.575421   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:53.125288   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:53.125322   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.125327   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.125331   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.125335   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.125338   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.125341   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.125347   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.125352   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.125358   60732 system_pods.go:74] duration metric: took 3.721147072s to wait for pod list to return data ...
	I0725 18:54:53.125365   60732 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:53.127677   60732 default_sa.go:45] found service account: "default"
	I0725 18:54:53.127695   60732 default_sa.go:55] duration metric: took 2.325927ms for default service account to be created ...
	I0725 18:54:53.127702   60732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:53.134656   60732 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:53.134682   60732 system_pods.go:89] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.134690   60732 system_pods.go:89] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.134697   60732 system_pods.go:89] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.134707   60732 system_pods.go:89] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.134713   60732 system_pods.go:89] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.134719   60732 system_pods.go:89] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.134729   60732 system_pods.go:89] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.134738   60732 system_pods.go:89] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.134745   60732 system_pods.go:126] duration metric: took 7.037359ms to wait for k8s-apps to be running ...
	I0725 18:54:53.134756   60732 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:53.134804   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:53.152898   60732 system_svc.go:56] duration metric: took 18.132464ms WaitForService to wait for kubelet
	I0725 18:54:53.152939   60732 kubeadm.go:582] duration metric: took 4m23.26971097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:53.152966   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:53.155626   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:53.155645   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:53.155654   60732 node_conditions.go:105] duration metric: took 2.684085ms to run NodePressure ...
	I0725 18:54:53.155664   60732 start.go:241] waiting for startup goroutines ...
	I0725 18:54:53.155670   60732 start.go:246] waiting for cluster config update ...
	I0725 18:54:53.155680   60732 start.go:255] writing updated cluster config ...
	I0725 18:54:53.155922   60732 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:53.202323   60732 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:53.204492   60732 out.go:177] * Done! kubectl is now configured to use "embed-certs-646344" cluster and "default" namespace by default
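Note: the closing line for this profile compares the kubectl client version with the cluster version and reports the minor-version skew. The small sketch below reproduces that comparison for the two strings printed above; the naive parsing is only for illustration and is not lifted from minikube.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component of a "major.minor.patch" version string.
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0
		}
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		client, cluster := "1.30.3", "1.30.3" // values reported in the log above
		skew := minor(client) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	}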
	I0725 18:54:52.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.599046   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.702358   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:54:54.702929   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:54.703166   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:54:56.600641   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:58.600997   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:59.703734   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:59.704045   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:01.099681   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:03.099863   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:05.099936   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:07.600199   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:09.600587   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:10.600594   59378 pod_ready.go:81] duration metric: took 4m0.007321371s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:55:10.600617   59378 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:55:10.600625   59378 pod_ready.go:38] duration metric: took 4m5.545225617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
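Note: the wait that just expired is the usual "poll the pod until its Ready condition is True, or stop at a context deadline" loop; here the deadline was four minutes and the loop ended with "context deadline exceeded" because metrics-server never reported Ready. Below is a hedged client-go sketch of that shape, assuming client-go is available on the module path and KUBECONFIG points at the cluster; the pod name from the log is used purely as an example.

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-78fcd8795b-zthnk", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("gave up:", ctx.Err()) // e.g. context deadline exceeded
				return
			case <-time.After(2 * time.Second):
			}
		}
	}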
	I0725 18:55:10.600637   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:55:10.600660   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:10.600701   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:10.652016   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:10.652040   59378 cri.go:89] found id: ""
	I0725 18:55:10.652047   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:10.652099   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.656405   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:10.656471   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:10.695672   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:10.695697   59378 cri.go:89] found id: ""
	I0725 18:55:10.695706   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:10.695768   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.700362   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:10.700424   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:10.736685   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.736702   59378 cri.go:89] found id: ""
	I0725 18:55:10.736709   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:10.736755   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.740626   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:10.740686   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:10.786452   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:10.786470   59378 cri.go:89] found id: ""
	I0725 18:55:10.786478   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:10.786533   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.790873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:10.790938   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:10.826203   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:10.826238   59378 cri.go:89] found id: ""
	I0725 18:55:10.826247   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:10.826311   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.830241   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:10.830418   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:10.865432   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:10.865460   59378 cri.go:89] found id: ""
	I0725 18:55:10.865470   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:10.865527   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.869415   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:10.869469   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:10.904230   59378 cri.go:89] found id: ""
	I0725 18:55:10.904254   59378 logs.go:276] 0 containers: []
	W0725 18:55:10.904262   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:10.904267   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:10.904339   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:10.938539   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:10.938558   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:10.938563   59378 cri.go:89] found id: ""
	I0725 18:55:10.938571   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:10.938623   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:09.704361   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:09.704593   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:10.942419   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.946266   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:10.946293   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.984335   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:10.984365   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:11.021733   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:11.021762   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:11.059218   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:11.059248   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:11.110886   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:11.110919   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:11.147381   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:11.147412   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:11.644012   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:11.644052   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:11.699290   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:11.699324   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:11.750317   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:11.750350   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:11.801340   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:11.801370   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:11.835746   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:11.835773   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:11.875309   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:11.875340   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:11.888262   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:11.888286   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:14.516169   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:55:14.533223   59378 api_server.go:72] duration metric: took 4m17.191676299s to wait for apiserver process to appear ...
	I0725 18:55:14.533248   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:55:14.533283   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:14.533328   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:14.568170   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:14.568188   59378 cri.go:89] found id: ""
	I0725 18:55:14.568195   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:14.568237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.572638   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:14.572704   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:14.605953   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:14.605976   59378 cri.go:89] found id: ""
	I0725 18:55:14.605983   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:14.606029   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.609849   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:14.609912   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:14.650049   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.650068   59378 cri.go:89] found id: ""
	I0725 18:55:14.650075   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:14.650117   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.653905   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:14.653966   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:14.697059   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:14.697078   59378 cri.go:89] found id: ""
	I0725 18:55:14.697086   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:14.697145   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.701179   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:14.701245   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:14.741482   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:14.741499   59378 cri.go:89] found id: ""
	I0725 18:55:14.741507   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:14.741554   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.745355   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:14.745410   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:14.784058   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.784077   59378 cri.go:89] found id: ""
	I0725 18:55:14.784086   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:14.784146   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.788254   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:14.788354   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:14.823286   59378 cri.go:89] found id: ""
	I0725 18:55:14.823309   59378 logs.go:276] 0 containers: []
	W0725 18:55:14.823317   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:14.823322   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:14.823369   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:14.860591   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.860625   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:14.860631   59378 cri.go:89] found id: ""
	I0725 18:55:14.860639   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:14.860693   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.864444   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.868015   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:14.868034   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.902336   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:14.902361   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.951281   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:14.951312   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.987810   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:14.987836   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:15.031264   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:15.031303   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:15.082950   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:15.082981   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:15.097240   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:15.097264   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:15.195392   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:15.195422   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:15.238978   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:15.239015   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:15.278551   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:15.278586   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:15.318486   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:15.318517   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:15.354217   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:15.354245   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:15.391511   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:15.391536   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:18.296420   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:55:18.301704   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:55:18.303040   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:55:18.303059   59378 api_server.go:131] duration metric: took 3.769804671s to wait for apiserver health ...
	I0725 18:55:18.303067   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:55:18.303097   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:18.303148   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:18.340192   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:18.340210   59378 cri.go:89] found id: ""
	I0725 18:55:18.340217   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:18.340262   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.343882   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:18.343936   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:18.381885   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:18.381912   59378 cri.go:89] found id: ""
	I0725 18:55:18.381922   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:18.381979   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.385682   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:18.385749   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:18.420162   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:18.420183   59378 cri.go:89] found id: ""
	I0725 18:55:18.420190   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:18.420237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.424103   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:18.424153   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:18.462946   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:18.462987   59378 cri.go:89] found id: ""
	I0725 18:55:18.462998   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:18.463055   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.467228   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:18.467278   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:18.510007   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:18.510036   59378 cri.go:89] found id: ""
	I0725 18:55:18.510046   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:18.510103   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.513873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:18.513937   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:18.551230   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:18.551255   59378 cri.go:89] found id: ""
	I0725 18:55:18.551264   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:18.551322   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.555764   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:18.555833   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:18.593584   59378 cri.go:89] found id: ""
	I0725 18:55:18.593615   59378 logs.go:276] 0 containers: []
	W0725 18:55:18.593626   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:18.593633   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:18.593690   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:18.631912   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.631938   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.631944   59378 cri.go:89] found id: ""
	I0725 18:55:18.631952   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:18.632036   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.635895   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.639457   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:18.639481   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.677563   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:18.677595   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.716298   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:18.716353   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:19.104236   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:19.104281   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:19.157931   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:19.157965   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:19.214479   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:19.214510   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:19.265860   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:19.265887   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:19.306476   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:19.306501   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:19.340758   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:19.340783   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:19.380798   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:19.380824   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:19.439585   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:19.439619   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:19.454117   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:19.454145   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:19.558944   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:19.558972   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:22.114733   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:55:22.114766   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.114773   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.114778   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.114783   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.114788   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.114792   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.114800   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.114806   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.114815   59378 system_pods.go:74] duration metric: took 3.811742621s to wait for pod list to return data ...
	I0725 18:55:22.114827   59378 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:55:22.118211   59378 default_sa.go:45] found service account: "default"
	I0725 18:55:22.118237   59378 default_sa.go:55] duration metric: took 3.400507ms for default service account to be created ...
	I0725 18:55:22.118245   59378 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:55:22.123350   59378 system_pods.go:86] 8 kube-system pods found
	I0725 18:55:22.123375   59378 system_pods.go:89] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.123380   59378 system_pods.go:89] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.123384   59378 system_pods.go:89] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.123390   59378 system_pods.go:89] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.123394   59378 system_pods.go:89] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.123398   59378 system_pods.go:89] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.123405   59378 system_pods.go:89] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.123410   59378 system_pods.go:89] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.123417   59378 system_pods.go:126] duration metric: took 5.166628ms to wait for k8s-apps to be running ...
	I0725 18:55:22.123424   59378 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:55:22.123467   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:55:22.139784   59378 system_svc.go:56] duration metric: took 16.349883ms WaitForService to wait for kubelet
	I0725 18:55:22.139808   59378 kubeadm.go:582] duration metric: took 4m24.798265923s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:55:22.139825   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:55:22.143958   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:55:22.143981   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:55:22.143992   59378 node_conditions.go:105] duration metric: took 4.161089ms to run NodePressure ...
	I0725 18:55:22.144006   59378 start.go:241] waiting for startup goroutines ...
	I0725 18:55:22.144015   59378 start.go:246] waiting for cluster config update ...
	I0725 18:55:22.144026   59378 start.go:255] writing updated cluster config ...
	I0725 18:55:22.144382   59378 ssh_runner.go:195] Run: rm -f paused
	I0725 18:55:22.192893   59378 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0725 18:55:22.195796   59378 out.go:177] * Done! kubectl is now configured to use "no-preload-371663" cluster and "default" namespace by default
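The successful no-preload start above closes with a kubelet service probe over SSH. A minimal sketch of that check, assuming bash on the node; the command is the one shown in the ssh_runner lines, the exit-code handling around it is an assumption:

    # kubelet liveness check as run by minikube (command copied from the log; wrapper is hypothetical)
    if sudo systemctl is-active --quiet service kubelet; then
        echo "kubelet is active"
    else
        echo "kubelet is not active" >&2
    fi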
	I0725 18:55:29.705545   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:29.705871   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.707936   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:56:09.708279   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.708303   60176 kubeadm.go:310] 
	I0725 18:56:09.708361   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:56:09.708425   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:56:09.708434   60176 kubeadm.go:310] 
	I0725 18:56:09.708495   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:56:09.708548   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:56:09.708721   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:56:09.708755   60176 kubeadm.go:310] 
	I0725 18:56:09.708910   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:56:09.708960   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:56:09.708997   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:56:09.709006   60176 kubeadm.go:310] 
	I0725 18:56:09.709130   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:56:09.709230   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:56:09.709239   60176 kubeadm.go:310] 
	I0725 18:56:09.709366   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:56:09.709499   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:56:09.709608   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:56:09.709715   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:56:09.709730   60176 kubeadm.go:310] 
	I0725 18:56:09.710446   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:56:09.710594   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:56:09.710699   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0725 18:56:09.710838   60176 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 18:56:09.710897   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:56:15.078699   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.367772874s)
	I0725 18:56:15.078772   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:56:15.093265   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:56:15.102513   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:56:15.102529   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:56:15.102570   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:56:15.111001   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:56:15.111059   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:56:15.119773   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:56:15.128109   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:56:15.128166   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:56:15.136753   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.145122   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:56:15.145179   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.153952   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:56:15.162067   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:56:15.162109   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
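The stale-config cleanup above applies the same check to each kubeconfig file: grep for the expected control-plane endpoint and remove the file when it is missing or does not match. A condensed sketch, assuming bash; the endpoint and file names come from the log, the loop form is an assumption:

    # Hypothetical condensed form of the per-file check-and-remove sequence logged above
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
            sudo rm -f "/etc/kubernetes/$f"    # drop configs that do not point at the expected endpoint
        fi
    done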
	I0725 18:56:15.170779   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:56:15.382925   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:58:11.387751   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:58:11.387868   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0725 18:58:11.389848   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:58:11.389935   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:58:11.390076   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:58:11.390177   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:58:11.390289   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:58:11.390389   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:58:11.392281   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:58:11.392400   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:58:11.392487   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:58:11.392609   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:58:11.392698   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:58:11.392808   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:58:11.392893   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:58:11.392960   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:58:11.393054   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:58:11.393160   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:58:11.393260   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:58:11.393311   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:58:11.393362   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:58:11.393415   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:58:11.393470   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:58:11.393522   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:58:11.393573   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:58:11.393665   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:58:11.393760   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:58:11.393815   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:58:11.393888   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:58:11.395197   60176 out.go:204]   - Booting up control plane ...
	I0725 18:58:11.395292   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:58:11.395385   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:58:11.395454   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:58:11.395528   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:58:11.395674   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:58:11.395717   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:58:11.395793   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396019   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396116   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396334   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396408   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396572   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396638   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396799   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396865   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.397061   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.397069   60176 kubeadm.go:310] 
	I0725 18:58:11.397102   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:58:11.397136   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:58:11.397141   60176 kubeadm.go:310] 
	I0725 18:58:11.397169   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:58:11.397212   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:58:11.397314   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:58:11.397338   60176 kubeadm.go:310] 
	I0725 18:58:11.397462   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:58:11.397504   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:58:11.397554   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:58:11.397566   60176 kubeadm.go:310] 
	I0725 18:58:11.397657   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:58:11.397730   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:58:11.397737   60176 kubeadm.go:310] 
	I0725 18:58:11.397832   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:58:11.397928   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:58:11.398009   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:58:11.398088   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:58:11.398144   60176 kubeadm.go:310] 
	I0725 18:58:11.398184   60176 kubeadm.go:394] duration metric: took 8m7.195831536s to StartCluster
	I0725 18:58:11.398237   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:58:11.398431   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:58:11.438474   60176 cri.go:89] found id: ""
	I0725 18:58:11.438497   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.438504   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:58:11.438509   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:58:11.438560   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:58:11.470965   60176 cri.go:89] found id: ""
	I0725 18:58:11.471000   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.471013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:58:11.471021   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:58:11.471086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:58:11.503353   60176 cri.go:89] found id: ""
	I0725 18:58:11.503387   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.503402   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:58:11.503409   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:58:11.503468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:58:11.535307   60176 cri.go:89] found id: ""
	I0725 18:58:11.535340   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.535350   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:58:11.535359   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:58:11.535425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:58:11.568071   60176 cri.go:89] found id: ""
	I0725 18:58:11.568094   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.568104   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:58:11.568118   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:58:11.568183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:58:11.600126   60176 cri.go:89] found id: ""
	I0725 18:58:11.600154   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.600165   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:58:11.600172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:58:11.600234   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:58:11.632609   60176 cri.go:89] found id: ""
	I0725 18:58:11.632635   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.632642   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:58:11.632648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:58:11.632706   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:58:11.666352   60176 cri.go:89] found id: ""
	I0725 18:58:11.666376   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.666384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
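With the API server unreachable, the diagnostics above fall back to querying CRI-O directly, probing each control-plane component by container name. Equivalent manual commands, assuming crictl on the node; the per-component query and the grep pipeline are the ones shown in the log, the loop is an assumption:

    # Probe for control-plane containers one component at a time (empty output = none found)
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
        echo "== $name =="
        sudo crictl ps -a --quiet --name="$name"
    done
    # Broader listing suggested by the kubeadm output earlier
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause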
	I0725 18:58:11.666392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:58:11.666409   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:58:11.766887   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:58:11.766912   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:58:11.766930   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:58:11.885565   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:58:11.885601   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:58:11.927611   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:58:11.927637   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:58:11.978011   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:58:11.978046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
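For reference, the diagnostics gathered at this point boil down to four commands, quoted verbatim from the ssh_runner lines above:

    sudo journalctl -u crio -n 400                                            # CRI-O logs
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a             # container status
    sudo journalctl -u kubelet -n 400                                         # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors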
	W0725 18:58:11.991296   60176 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 18:58:11.991350   60176 out.go:239] * 
	W0725 18:58:11.991412   60176 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.991433   60176 out.go:239] * 
	W0725 18:58:11.992535   60176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:58:11.996223   60176 out.go:177] 
	W0725 18:58:11.997418   60176 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.997464   60176 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 18:58:11.997495   60176 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 18:58:11.998869   60176 out.go:177] 
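The run ends with a concrete suggestion. A sketch of acting on it, hedged accordingly; the journalctl command and the --extra-config value come from the log, the profile name is a placeholder:

    # Inspect why the kubelet never became healthy on the node
    journalctl -xeu kubelet
    # Retry the start with the suggested cgroup-driver override (replace <profile> with the actual profile)
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd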
	
	
	==> CRI-O <==
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.577749773Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934199577718925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9becf7e2-54b4-41cd-8f9e-18550564ba34 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.578358022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fad8c7e-fc23-482d-a935-821ca6918416 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.578462735Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fad8c7e-fc23-482d-a935-821ca6918416 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.578747557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933420553659976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab6e673af88268ee06fe9c2b3b7ac098cc58bbcb832927545a17193d7e41636,PodSandboxId:661084db9f623ffc4b6c1e8fa5ce376bc15144c73fdec444b14a710909a5e356,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933400245604126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 700149fa-1af8-429d-b3c3-f47b06c7e4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 2265fe6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f,PodSandboxId:9870d6544db5a4fb22a11f1a993445546e33defe7f72c133bb4a4a4628b9e70d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933397392295858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mfjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 452e9a58-6c09-4b38-8a0d-40f7b2b013d6,},Annotations:map[string]string{io.kubernetes.container.hash: 40083d68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b,PodSandboxId:c1147915f291074ecedadb6c7ffc9ee5cab7ab76b38111e55e68c88f63d5717c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933389732996259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-smhmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6cc9bb4-5
72b-4d0e-92a6-47fc71501ade,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5e86bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933389728932129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c
-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1,PodSandboxId:e6265e24cb556a6c2c093aaf83ca8dad153108baa32fcb246b03933d54c3c5cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933385231561441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 85c4b80ac95179ffd0f4589308073115,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd,PodSandboxId:63d0cb46a94667ac4d8cdfa8d73be6186a6bd8b9932462598d54a97d103b5723,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933385226195625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6dc78251114ba960548e3208c1b8f7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118,PodSandboxId:684b58e7e432d5d25e329a790b9d5ecfa1bcb511d90e0a9db731808c24121ca2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933385228445364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ba0cfda6117a855993585bfe9bf2760e,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1,PodSandboxId:cacbb00e2ed6e1e12d78c80c68a4bf4f7a8af3e08bb266c12b5d292d4596ca4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933385203671313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31178a6c8df7add30220227e00cb54
80,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc2c0d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fad8c7e-fc23-482d-a935-821ca6918416 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.618620535Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=398da70f-b4a7-4f42-951c-df6260c38b5e name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.618713703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=398da70f-b4a7-4f42-951c-df6260c38b5e name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.620087323Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d0625c7-4894-404d-9d93-74cb694fb196 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.620999801Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934199620963435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d0625c7-4894-404d-9d93-74cb694fb196 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.624527485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25dd144c-e405-4adc-b39f-d1d7e9dee574 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.624686168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25dd144c-e405-4adc-b39f-d1d7e9dee574 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.625073395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933420553659976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab6e673af88268ee06fe9c2b3b7ac098cc58bbcb832927545a17193d7e41636,PodSandboxId:661084db9f623ffc4b6c1e8fa5ce376bc15144c73fdec444b14a710909a5e356,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933400245604126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 700149fa-1af8-429d-b3c3-f47b06c7e4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 2265fe6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f,PodSandboxId:9870d6544db5a4fb22a11f1a993445546e33defe7f72c133bb4a4a4628b9e70d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933397392295858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mfjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 452e9a58-6c09-4b38-8a0d-40f7b2b013d6,},Annotations:map[string]string{io.kubernetes.container.hash: 40083d68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b,PodSandboxId:c1147915f291074ecedadb6c7ffc9ee5cab7ab76b38111e55e68c88f63d5717c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933389732996259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-smhmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6cc9bb4-5
72b-4d0e-92a6-47fc71501ade,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5e86bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933389728932129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c
-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1,PodSandboxId:e6265e24cb556a6c2c093aaf83ca8dad153108baa32fcb246b03933d54c3c5cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933385231561441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 85c4b80ac95179ffd0f4589308073115,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd,PodSandboxId:63d0cb46a94667ac4d8cdfa8d73be6186a6bd8b9932462598d54a97d103b5723,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933385226195625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6dc78251114ba960548e3208c1b8f7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118,PodSandboxId:684b58e7e432d5d25e329a790b9d5ecfa1bcb511d90e0a9db731808c24121ca2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933385228445364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ba0cfda6117a855993585bfe9bf2760e,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1,PodSandboxId:cacbb00e2ed6e1e12d78c80c68a4bf4f7a8af3e08bb266c12b5d292d4596ca4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933385203671313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31178a6c8df7add30220227e00cb54
80,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc2c0d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25dd144c-e405-4adc-b39f-d1d7e9dee574 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.661317772Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ea52961-262d-4859-9177-28402e29e01b name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.661413679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ea52961-262d-4859-9177-28402e29e01b name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.662878489Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45f53158-3ff0-4702-a468-cc38fcf6163c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.663372581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934199663348000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45f53158-3ff0-4702-a468-cc38fcf6163c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.663827114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ccdfc9d-1721-4280-81ab-0dacefdc5ca3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.663902375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ccdfc9d-1721-4280-81ab-0dacefdc5ca3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.664103582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933420553659976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab6e673af88268ee06fe9c2b3b7ac098cc58bbcb832927545a17193d7e41636,PodSandboxId:661084db9f623ffc4b6c1e8fa5ce376bc15144c73fdec444b14a710909a5e356,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933400245604126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 700149fa-1af8-429d-b3c3-f47b06c7e4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 2265fe6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f,PodSandboxId:9870d6544db5a4fb22a11f1a993445546e33defe7f72c133bb4a4a4628b9e70d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933397392295858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mfjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 452e9a58-6c09-4b38-8a0d-40f7b2b013d6,},Annotations:map[string]string{io.kubernetes.container.hash: 40083d68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b,PodSandboxId:c1147915f291074ecedadb6c7ffc9ee5cab7ab76b38111e55e68c88f63d5717c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933389732996259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-smhmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6cc9bb4-5
72b-4d0e-92a6-47fc71501ade,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5e86bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933389728932129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c
-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1,PodSandboxId:e6265e24cb556a6c2c093aaf83ca8dad153108baa32fcb246b03933d54c3c5cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933385231561441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 85c4b80ac95179ffd0f4589308073115,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd,PodSandboxId:63d0cb46a94667ac4d8cdfa8d73be6186a6bd8b9932462598d54a97d103b5723,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933385226195625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6dc78251114ba960548e3208c1b8f7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118,PodSandboxId:684b58e7e432d5d25e329a790b9d5ecfa1bcb511d90e0a9db731808c24121ca2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933385228445364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ba0cfda6117a855993585bfe9bf2760e,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1,PodSandboxId:cacbb00e2ed6e1e12d78c80c68a4bf4f7a8af3e08bb266c12b5d292d4596ca4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933385203671313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31178a6c8df7add30220227e00cb54
80,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc2c0d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ccdfc9d-1721-4280-81ab-0dacefdc5ca3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.694103348Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=969e53bb-5c6f-407b-b994-01b880b96f00 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.694274349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=969e53bb-5c6f-407b-b994-01b880b96f00 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.695329211Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=004a39f5-e95f-4bc9-be7e-e2f6e3aeedec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.695866655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934199695842390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=004a39f5-e95f-4bc9-be7e-e2f6e3aeedec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.696419204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b49f2432-19fd-42ce-9b28-3379e1b985dd name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.696480933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b49f2432-19fd-42ce-9b28-3379e1b985dd name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:19 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:03:19.696673021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933420553659976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab6e673af88268ee06fe9c2b3b7ac098cc58bbcb832927545a17193d7e41636,PodSandboxId:661084db9f623ffc4b6c1e8fa5ce376bc15144c73fdec444b14a710909a5e356,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933400245604126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 700149fa-1af8-429d-b3c3-f47b06c7e4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 2265fe6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f,PodSandboxId:9870d6544db5a4fb22a11f1a993445546e33defe7f72c133bb4a4a4628b9e70d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933397392295858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mfjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 452e9a58-6c09-4b38-8a0d-40f7b2b013d6,},Annotations:map[string]string{io.kubernetes.container.hash: 40083d68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b,PodSandboxId:c1147915f291074ecedadb6c7ffc9ee5cab7ab76b38111e55e68c88f63d5717c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933389732996259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-smhmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6cc9bb4-5
72b-4d0e-92a6-47fc71501ade,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5e86bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933389728932129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c
-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1,PodSandboxId:e6265e24cb556a6c2c093aaf83ca8dad153108baa32fcb246b03933d54c3c5cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933385231561441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 85c4b80ac95179ffd0f4589308073115,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd,PodSandboxId:63d0cb46a94667ac4d8cdfa8d73be6186a6bd8b9932462598d54a97d103b5723,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933385226195625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6dc78251114ba960548e3208c1b8f7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118,PodSandboxId:684b58e7e432d5d25e329a790b9d5ecfa1bcb511d90e0a9db731808c24121ca2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933385228445364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ba0cfda6117a855993585bfe9bf2760e,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1,PodSandboxId:cacbb00e2ed6e1e12d78c80c68a4bf4f7a8af3e08bb266c12b5d292d4596ca4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933385203671313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31178a6c8df7add30220227e00cb54
80,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc2c0d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b49f2432-19fd-42ce-9b28-3379e1b985dd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2387f4d44d2e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   b9ae6dd1fced9       storage-provisioner
	3ab6e673af882       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   661084db9f623       busybox
	b64c5166c6547       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   9870d6544db5a       coredns-7db6d8ff4d-mfjzs
	ef20f38592f5c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   c1147915f2910       kube-proxy-smhmv
	070dd1b58b01a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   b9ae6dd1fced9       storage-provisioner
	de5e9269d9497       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   e6265e24cb556       kube-controller-manager-default-k8s-diff-port-600433
	b6b7ff25c3f04       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   684b58e7e432d       kube-apiserver-default-k8s-diff-port-600433
	0c03165e87eac       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   63d0cb46a9466       kube-scheduler-default-k8s-diff-port-600433
	45aafe613d91f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   cacbb00e2ed6e       etcd-default-k8s-diff-port-600433
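	For reference, the "container status" table above is the node-side CRI view; a minimal way to reproduce it against this profile is to run crictl over minikube ssh (a sketch only; the binary path and profile name are taken from this report, and crictl must be run as root on the node):

	  # list all CRI-O containers on the node, including exited attempts
	  out/minikube-linux-amd64 -p default-k8s-diff-port-600433 ssh "sudo crictl ps -a"
	  # tail the logs of a specific container by (possibly truncated) ID
	  out/minikube-linux-amd64 -p default-k8s-diff-port-600433 ssh "sudo crictl logs --tail 50 b64c5166c6547"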
	
	
	==> coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41830 - 57833 "HINFO IN 5102535641444002316.296120937777839854. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011398844s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-600433
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-600433
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=default-k8s-diff-port-600433
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_41_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:41:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-600433
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 19:03:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 19:00:30 +0000   Thu, 25 Jul 2024 18:41:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 19:00:30 +0000   Thu, 25 Jul 2024 18:41:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 19:00:30 +0000   Thu, 25 Jul 2024 18:41:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 19:00:30 +0000   Thu, 25 Jul 2024 18:49:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.221
	  Hostname:    default-k8s-diff-port-600433
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1bc20fba7e1f4954abf42c564b7b937b
	  System UUID:                1bc20fba-7e1f-4954-abf4-2c564b7b937b
	  Boot ID:                    827e04e5-2063-444f-a88c-3db4783360ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace     Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------     ----                                                    ------------  ----------  ---------------  -------------  ---
	  default       busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system   coredns-7db6d8ff4d-mfjzs                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system   etcd-default-k8s-diff-port-600433                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system   kube-apiserver-default-k8s-diff-port-600433             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system   kube-controller-manager-default-k8s-diff-port-600433    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system   kube-proxy-smhmv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system   kube-scheduler-default-k8s-diff-port-600433             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system   metrics-server-569cc877fc-5js8s                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system   storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasSufficientMemory
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-600433 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-600433 event: Registered Node default-k8s-diff-port-600433 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-600433 event: Registered Node default-k8s-diff-port-600433 in Controller
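	The node description above is standard kubectl describe output; assuming the kubeconfig context carries the profile name (as minikube sets it by default), the same information can be pulled directly:

	  # full node description, including conditions and events
	  kubectl --context default-k8s-diff-port-600433 describe node default-k8s-diff-port-600433
	  # just the node conditions, machine readable
	  kubectl --context default-k8s-diff-port-600433 get node default-k8s-diff-port-600433 -o jsonpath='{.status.conditions}'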
	
	
	==> dmesg <==
	[Jul25 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051215] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038050] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.682507] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.794889] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.512917] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.306395] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.056245] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060594] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.182710] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.160538] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.285715] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.208841] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +1.708600] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +0.065203] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.503494] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.932285] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +3.760882] kauditd_printk_skb: 62 callbacks suppressed
	[Jul25 18:50] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] <==
	{"level":"warn","ts":"2024-07-25T18:50:04.232273Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.970778ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7552132965116899477 > lease_revoke:<id:68ce90eb32f401c2>","response":"size:27"}
	{"level":"info","ts":"2024-07-25T18:50:04.232371Z","caller":"traceutil/trace.go:171","msg":"trace[1328370088] linearizableReadLoop","detail":"{readStateIndex:578; appliedIndex:576; }","duration":"466.250799ms","start":"2024-07-25T18:50:03.766099Z","end":"2024-07-25T18:50:04.232349Z","steps":["trace[1328370088] 'read index received'  (duration: 79.66646ms)","trace[1328370088] 'applied index is now lower than readState.Index'  (duration: 386.583092ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-25T18:50:04.232513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"466.393673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-600433\" ","response":"range_response_count:1 size:4520"}
	{"level":"info","ts":"2024-07-25T18:50:04.232562Z","caller":"traceutil/trace.go:171","msg":"trace[1575564916] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-600433; range_end:; response_count:1; response_revision:543; }","duration":"466.514995ms","start":"2024-07-25T18:50:03.766032Z","end":"2024-07-25T18:50:04.232547Z","steps":["trace[1575564916] 'agreement among raft nodes before linearized reading'  (duration: 466.370227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T18:50:04.23261Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T18:50:03.766016Z","time spent":"466.582615ms","remote":"127.0.0.1:47418","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":4542,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-600433\" "}
	{"level":"info","ts":"2024-07-25T18:50:04.777788Z","caller":"traceutil/trace.go:171","msg":"trace[802680494] transaction","detail":"{read_only:false; response_revision:544; number_of_response:1; }","duration":"282.349245ms","start":"2024-07-25T18:50:04.495424Z","end":"2024-07-25T18:50:04.777773Z","steps":["trace[802680494] 'process raft request'  (duration: 282.226923ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T18:50:05.25976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.463337ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7552132965116899484 > lease_revoke:<id:68ce90eb3a4f6f6a>","response":"size:27"}
	{"level":"info","ts":"2024-07-25T18:50:05.259994Z","caller":"traceutil/trace.go:171","msg":"trace[1434571863] linearizableReadLoop","detail":"{readStateIndex:580; appliedIndex:579; }","duration":"493.336624ms","start":"2024-07-25T18:50:04.766643Z","end":"2024-07-25T18:50:05.259979Z","steps":["trace[1434571863] 'read index received'  (duration: 11.435511ms)","trace[1434571863] 'applied index is now lower than readState.Index'  (duration: 481.900142ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-25T18:50:05.260218Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"493.575182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-600433\" ","response":"range_response_count:1 size:4520"}
	{"level":"info","ts":"2024-07-25T18:50:05.260427Z","caller":"traceutil/trace.go:171","msg":"trace[1858493967] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-600433; range_end:; response_count:1; response_revision:544; }","duration":"493.82649ms","start":"2024-07-25T18:50:04.766593Z","end":"2024-07-25T18:50:05.260419Z","steps":["trace[1858493967] 'agreement among raft nodes before linearized reading'  (duration: 493.453206ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T18:50:05.260476Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T18:50:04.766575Z","time spent":"493.894964ms","remote":"127.0.0.1:47418","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":4542,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-600433\" "}
	{"level":"warn","ts":"2024-07-25T18:50:05.260391Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"477.999613ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-600433\" ","response":"range_response_count:1 size:4520"}
	{"level":"info","ts":"2024-07-25T18:50:05.260819Z","caller":"traceutil/trace.go:171","msg":"trace[921465976] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-600433; range_end:; response_count:1; response_revision:544; }","duration":"478.451975ms","start":"2024-07-25T18:50:04.782358Z","end":"2024-07-25T18:50:05.26081Z","steps":["trace[921465976] 'agreement among raft nodes before linearized reading'  (duration: 477.965176ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T18:50:05.260876Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T18:50:04.782345Z","time spent":"478.52055ms","remote":"127.0.0.1:47418","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":4542,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-600433\" "}
	{"level":"info","ts":"2024-07-25T18:50:22.21475Z","caller":"traceutil/trace.go:171","msg":"trace[774573672] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:604; }","duration":"440.808971ms","start":"2024-07-25T18:50:21.773908Z","end":"2024-07-25T18:50:22.214717Z","steps":["trace[774573672] 'read index received'  (duration: 440.629606ms)","trace[774573672] 'applied index is now lower than readState.Index'  (duration: 178.688µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T18:50:22.215366Z","caller":"traceutil/trace.go:171","msg":"trace[206308122] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"657.921296ms","start":"2024-07-25T18:50:21.557424Z","end":"2024-07-25T18:50:22.215345Z","steps":["trace[206308122] 'process raft request'  (duration: 657.180341ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T18:50:22.217157Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T18:50:21.557409Z","time spent":"658.234209ms","remote":"127.0.0.1:47418","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3832,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:561 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3778 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"warn","ts":"2024-07-25T18:50:22.21551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"441.566107ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5js8s\" ","response":"range_response_count:1 size:4293"}
	{"level":"info","ts":"2024-07-25T18:50:22.217506Z","caller":"traceutil/trace.go:171","msg":"trace[2032718435] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-5js8s; range_end:; response_count:1; response_revision:566; }","duration":"443.605989ms","start":"2024-07-25T18:50:21.773884Z","end":"2024-07-25T18:50:22.21749Z","steps":["trace[2032718435] 'agreement among raft nodes before linearized reading'  (duration: 441.509309ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T18:50:22.217598Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T18:50:21.773872Z","time spent":"443.713846ms","remote":"127.0.0.1:47418","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4315,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5js8s\" "}
	{"level":"warn","ts":"2024-07-25T18:50:22.477497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.019614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-5js8s\" ","response":"range_response_count:1 size:4293"}
	{"level":"info","ts":"2024-07-25T18:50:22.477588Z","caller":"traceutil/trace.go:171","msg":"trace[1611525272] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-5js8s; range_end:; response_count:1; response_revision:566; }","duration":"203.155672ms","start":"2024-07-25T18:50:22.274419Z","end":"2024-07-25T18:50:22.477574Z","steps":["trace[1611525272] 'range keys from in-memory index tree'  (duration: 202.761769ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T18:59:47.229024Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":797}
	{"level":"info","ts":"2024-07-25T18:59:47.239792Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":797,"took":"10.219215ms","hash":755398250,"current-db-size-bytes":2301952,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2301952,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-25T18:59:47.239903Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":755398250,"revision":797,"compact-revision":-1}
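	The repeated "apply request took too long" warnings above mean the etcd backend was slower than its 100ms target, which is common on loaded test VMs. A hedged sketch for checking etcd health from the static pod, assuming the certificate locations minikube normally uses under /var/lib/minikube/certs/etcd:

	  # query the local etcd member's status from inside the static pod
	  kubectl --context default-k8s-diff-port-600433 -n kube-system exec etcd-default-k8s-diff-port-600433 -- \
	    etcdctl --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	      --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	      endpoint status --write-out=table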
	
	
	==> kernel <==
	 19:03:20 up 14 min,  0 users,  load average: 0.26, 0.12, 0.08
	Linux default-k8s-diff-port-600433 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] <==
	I0725 18:57:49.461939       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 18:59:48.464780       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 18:59:48.464893       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0725 18:59:49.465973       1 handler_proxy.go:93] no RequestInfo found in the context
	W0725 18:59:49.466022       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 18:59:49.466250       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 18:59:49.466278       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0725 18:59:49.466090       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 18:59:49.468309       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:00:49.467024       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:00:49.467115       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:00:49.467158       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:00:49.469269       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:00:49.469367       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 19:00:49.469405       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:02:49.467931       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:02:49.468047       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:02:49.468058       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:02:49.470322       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:02:49.470356       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 19:02:49.470364       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] <==
	I0725 18:57:31.717429       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 18:58:01.253782       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 18:58:01.727265       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 18:58:31.261091       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 18:58:31.735313       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 18:59:01.266398       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 18:59:01.742874       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 18:59:31.271837       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 18:59:31.750023       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:00:01.280402       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:00:01.759026       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:00:31.284900       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:00:31.768053       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0725 19:00:59.334902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="281.525µs"
	E0725 19:01:01.289697       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:01:01.775367       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0725 19:01:10.336662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="137.591µs"
	E0725 19:01:31.294400       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:01:31.784363       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:02:01.299837       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:02:01.791228       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:02:31.304914       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:02:31.798208       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:03:01.310618       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:03:01.805760       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] <==
	I0725 18:49:49.931593       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:49:49.947918       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.221"]
	I0725 18:49:50.008178       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:49:50.008225       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:49:50.008242       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:49:50.011236       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:49:50.011434       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:49:50.011445       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:49:50.013106       1 config.go:192] "Starting service config controller"
	I0725 18:49:50.013172       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:49:50.013195       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:49:50.013199       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:49:50.013663       1 config.go:319] "Starting node config controller"
	I0725 18:49:50.013684       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:49:50.114940       1 shared_informer.go:320] Caches are synced for service config
	I0725 18:49:50.115280       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:49:50.116410       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] <==
	I0725 18:49:46.086594       1 serving.go:380] Generated self-signed cert in-memory
	W0725 18:49:48.419056       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 18:49:48.419149       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:49:48.419161       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 18:49:48.419167       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 18:49:48.486402       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 18:49:48.486435       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:49:48.487958       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 18:49:48.488037       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:49:48.488065       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:49:48.488081       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 18:49:48.589794       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 19:00:44 default-k8s-diff-port-600433 kubelet[937]: E0725 19:00:44.336172     937 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 25 19:00:44 default-k8s-diff-port-600433 kubelet[937]: E0725 19:00:44.336255     937 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 25 19:00:44 default-k8s-diff-port-600433 kubelet[937]: E0725 19:00:44.336755     937 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v5xqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-5js8s_kube-system(1c72ac7a-9a56-4056-80bf-398eeab90b94): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 25 19:00:44 default-k8s-diff-port-600433 kubelet[937]: E0725 19:00:44.336838     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:00:59 default-k8s-diff-port-600433 kubelet[937]: E0725 19:00:59.320964     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:01:10 default-k8s-diff-port-600433 kubelet[937]: E0725 19:01:10.319542     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:01:25 default-k8s-diff-port-600433 kubelet[937]: E0725 19:01:25.319601     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:01:40 default-k8s-diff-port-600433 kubelet[937]: E0725 19:01:40.319180     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:01:43 default-k8s-diff-port-600433 kubelet[937]: E0725 19:01:43.342901     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:01:43 default-k8s-diff-port-600433 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:01:43 default-k8s-diff-port-600433 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:01:43 default-k8s-diff-port-600433 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:01:43 default-k8s-diff-port-600433 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:01:55 default-k8s-diff-port-600433 kubelet[937]: E0725 19:01:55.321086     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:02:10 default-k8s-diff-port-600433 kubelet[937]: E0725 19:02:10.319209     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:02:22 default-k8s-diff-port-600433 kubelet[937]: E0725 19:02:22.319180     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:02:34 default-k8s-diff-port-600433 kubelet[937]: E0725 19:02:34.319458     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:02:43 default-k8s-diff-port-600433 kubelet[937]: E0725 19:02:43.345072     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:02:43 default-k8s-diff-port-600433 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:02:43 default-k8s-diff-port-600433 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:02:43 default-k8s-diff-port-600433 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:02:43 default-k8s-diff-port-600433 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:02:45 default-k8s-diff-port-600433 kubelet[937]: E0725 19:02:45.321471     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:02:58 default-k8s-diff-port-600433 kubelet[937]: E0725 19:02:58.319055     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:03:12 default-k8s-diff-port-600433 kubelet[937]: E0725 19:03:12.319581     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	
	
	==> storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] <==
	I0725 18:49:49.858088       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0725 18:50:19.862633       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] <==
	I0725 18:50:20.672751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 18:50:20.686440       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 18:50:20.686655       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 18:50:20.700596       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 18:50:20.700866       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-600433_c6e7b843-a5f6-4764-b122-eea9678b9b6a!
	I0725 18:50:20.701171       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3281dd58-1ba3-4e8d-af3f-db67d793b109", APIVersion:"v1", ResourceVersion:"564", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-600433_c6e7b843-a5f6-4764-b122-eea9678b9b6a became leader
	I0725 18:50:20.801716       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-600433_c6e7b843-a5f6-4764-b122-eea9678b9b6a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-600433 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5js8s
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-600433 describe pod metrics-server-569cc877fc-5js8s
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-600433 describe pod metrics-server-569cc877fc-5js8s: exit status 1 (65.318367ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5js8s" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-600433 describe pod metrics-server-569cc877fc-5js8s: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.19s)
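
For reference, a minimal, illustrative Go sketch of the same non-running-pod check the post-mortem performs with kubectl above. It is not part of the minikube test harness; it assumes client-go is available and that the local kubeconfig contains a context named default-k8s-diff-port-600433, as recorded in this report.

// nonrunning.go: list pods whose phase is not Running, mirroring
// `kubectl get po -A --field-selector=status.phase!=Running`.
// Hypothetical helper for post-mortem triage, not harness code.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig and pin the context used by this test run
	// (context name taken from the report; adjust for other profiles).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-600433"}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Field selector matches the post-mortem query in helpers_test.go output.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Against a healthy cluster this prints nothing; during the failure window above it would have listed metrics-server-569cc877fc-5js8s, which stays Pending because its image registry fake.domain does not resolve.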

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-646344 -n embed-certs-646344
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-25 19:03:53.71725943 +0000 UTC m=+5707.797991286
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-646344 -n embed-certs-646344
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-646344 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-646344 logs -n 25: (2.021628942s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC |                     |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979261                              | cert-expiration-979261       | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:42 UTC |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-819413             | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-819413                  | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-108542        | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| image   | newest-cni-819413 image list                           | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-371663                  | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-371663 --memory=2200                     | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-600433       | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:54 UTC |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-646344            | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-108542             | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-646344                 | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC | 25 Jul 24 18:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:47:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:47:51.335413   60732 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:47:51.335822   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.335880   60732 out.go:304] Setting ErrFile to fd 2...
	I0725 18:47:51.335900   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.336419   60732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:47:51.337339   60732 out.go:298] Setting JSON to false
	I0725 18:47:51.338209   60732 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5415,"bootTime":1721927856,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:47:51.338264   60732 start.go:139] virtualization: kvm guest
	I0725 18:47:51.340134   60732 out.go:177] * [embed-certs-646344] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:47:51.341750   60732 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:47:51.341752   60732 notify.go:220] Checking for updates...
	I0725 18:47:51.344351   60732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:47:51.345770   60732 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:47:51.346912   60732 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:47:51.348038   60732 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:47:51.349161   60732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:47:51.350578   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:47:51.350953   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.350991   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.365561   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0725 18:47:51.365978   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.366490   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.366509   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.366823   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.366999   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.367234   60732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:47:51.367497   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.367527   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.381639   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0725 18:47:51.381960   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.382381   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.382402   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.382685   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.382870   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.415199   60732 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:47:51.416470   60732 start.go:297] selected driver: kvm2
	I0725 18:47:51.416488   60732 start.go:901] validating driver "kvm2" against &{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.416607   60732 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:47:51.417317   60732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.417405   60732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:47:51.431942   60732 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:47:51.432284   60732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:47:51.432371   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:47:51.432386   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:47:51.432434   60732 start.go:340] cluster config:
	{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.432535   60732 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.435012   60732 out.go:177] * Starting "embed-certs-646344" primary control-plane node in "embed-certs-646344" cluster
	I0725 18:47:53.472602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:47:51.436106   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:47:51.436136   60732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 18:47:51.436143   60732 cache.go:56] Caching tarball of preloaded images
	I0725 18:47:51.436215   60732 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:47:51.436238   60732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 18:47:51.436365   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:47:51.436560   60732 start.go:360] acquireMachinesLock for embed-certs-646344: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:47:59.552616   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:02.624594   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:08.704607   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:11.776581   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:17.856602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:20.928547   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:27.008590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:30.084604   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:36.160617   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:39.232633   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:45.312630   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:48.384662   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:54.464559   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:57.536621   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:03.616552   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:06.688590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.773620   59645 start.go:364] duration metric: took 4m26.592394108s to acquireMachinesLock for "default-k8s-diff-port-600433"
	I0725 18:49:15.773683   59645 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:15.773694   59645 fix.go:54] fixHost starting: 
	I0725 18:49:15.774019   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:15.774051   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:15.789240   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0725 18:49:15.789740   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:15.790212   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:15.790233   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:15.790591   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:15.790845   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:15.791014   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:15.793113   59645 fix.go:112] recreateIfNeeded on default-k8s-diff-port-600433: state=Stopped err=<nil>
	I0725 18:49:15.793149   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	W0725 18:49:15.793313   59645 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:15.795191   59645 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-600433" ...
	I0725 18:49:12.768538   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.771150   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:15.771186   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771533   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:49:15.771558   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771774   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:49:15.773458   59378 machine.go:97] duration metric: took 4m37.565633658s to provisionDockerMachine
	I0725 18:49:15.773505   59378 fix.go:56] duration metric: took 4m37.588536865s for fixHost
	I0725 18:49:15.773515   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 4m37.588577134s
	W0725 18:49:15.773539   59378 start.go:714] error starting host: provision: host is not running
	W0725 18:49:15.773622   59378 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0725 18:49:15.773634   59378 start.go:729] Will try again in 5 seconds ...
	I0725 18:49:15.796482   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Start
	I0725 18:49:15.796686   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring networks are active...
	I0725 18:49:15.797399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network default is active
	I0725 18:49:15.797752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network mk-default-k8s-diff-port-600433 is active
	I0725 18:49:15.798080   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Getting domain xml...
	I0725 18:49:15.798673   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Creating domain...
	I0725 18:49:17.018432   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting to get IP...
	I0725 18:49:17.019400   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.019970   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.020072   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.019959   61066 retry.go:31] will retry after 308.610139ms: waiting for machine to come up
	I0725 18:49:17.330698   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331224   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.331162   61066 retry.go:31] will retry after 334.762083ms: waiting for machine to come up
	I0725 18:49:17.667824   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668211   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668241   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.668158   61066 retry.go:31] will retry after 474.612313ms: waiting for machine to come up
	I0725 18:49:18.145035   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145575   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.145498   61066 retry.go:31] will retry after 493.878098ms: waiting for machine to come up
	I0725 18:49:18.641257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641839   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.641705   61066 retry.go:31] will retry after 747.653142ms: waiting for machine to come up
	I0725 18:49:20.776022   59378 start.go:360] acquireMachinesLock for no-preload-371663: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:49:19.390788   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391296   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:19.391237   61066 retry.go:31] will retry after 790.014184ms: waiting for machine to come up
	I0725 18:49:20.183244   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183733   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183756   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:20.183676   61066 retry.go:31] will retry after 932.227483ms: waiting for machine to come up
	I0725 18:49:21.117548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.117989   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.118019   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:21.117947   61066 retry.go:31] will retry after 1.421954156s: waiting for machine to come up
	I0725 18:49:22.541650   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542032   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542059   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:22.541972   61066 retry.go:31] will retry after 1.281624824s: waiting for machine to come up
	I0725 18:49:23.825380   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825721   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825738   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:23.825700   61066 retry.go:31] will retry after 1.470467032s: waiting for machine to come up
	I0725 18:49:25.298488   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.298993   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.299016   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:25.298958   61066 retry.go:31] will retry after 2.857621922s: waiting for machine to come up
	I0725 18:49:28.157929   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158361   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158387   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:28.158322   61066 retry.go:31] will retry after 2.354044303s: waiting for machine to come up
	I0725 18:49:30.514911   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515408   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515440   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:30.515361   61066 retry.go:31] will retry after 4.26590841s: waiting for machine to come up
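Each "will retry after ..." line above is one iteration of a bounded wait for the restarted domain to pick up a DHCP lease. A rough sketch of that wait-with-backoff pattern (the poll function, durations and jitter are illustrative, not minikube's actual retry helper):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 5*time.Second {
			backoff *= 2 // grow the base delay, as the log intervals roughly do
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		if time.Since(start) > 2*time.Second {
			return "192.168.50.221", nil // pretend the lease shows up after ~2s
		}
		return "", errors.New("no lease yet")
	}, 30*time.Second)
	fmt.Println(ip, err)
}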
	I0725 18:49:36.036943   60176 start.go:364] duration metric: took 3m49.551567331s to acquireMachinesLock for "old-k8s-version-108542"
	I0725 18:49:36.037007   60176 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:36.037018   60176 fix.go:54] fixHost starting: 
	I0725 18:49:36.037477   60176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:36.037517   60176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:36.055190   60176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0725 18:49:36.055631   60176 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:36.056086   60176 main.go:141] libmachine: Using API Version  1
	I0725 18:49:36.056105   60176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:36.056466   60176 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:36.056685   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:36.056862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetState
	I0725 18:49:36.058311   60176 fix.go:112] recreateIfNeeded on old-k8s-version-108542: state=Stopped err=<nil>
	I0725 18:49:36.058348   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	W0725 18:49:36.058530   60176 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:36.060822   60176 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-108542" ...
	I0725 18:49:36.062077   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .Start
	I0725 18:49:36.062241   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring networks are active...
	I0725 18:49:36.062926   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network default is active
	I0725 18:49:36.063329   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network mk-old-k8s-version-108542 is active
	I0725 18:49:36.063698   60176 main.go:141] libmachine: (old-k8s-version-108542) Getting domain xml...
	I0725 18:49:36.064367   60176 main.go:141] libmachine: (old-k8s-version-108542) Creating domain...
	I0725 18:49:34.786308   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786801   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Found IP for machine: 192.168.50.221
	I0725 18:49:34.786836   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has current primary IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserving static IP address...
	I0725 18:49:34.787187   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.787223   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | skip adding static IP to network mk-default-k8s-diff-port-600433 - found existing host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"}
	I0725 18:49:34.787237   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserved static IP address: 192.168.50.221
	I0725 18:49:34.787251   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Getting to WaitForSSH function...
	I0725 18:49:34.787261   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for SSH to be available...
	I0725 18:49:34.789202   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789467   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.789494   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789582   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH client type: external
	I0725 18:49:34.789608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa (-rw-------)
	I0725 18:49:34.789642   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:34.789656   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | About to run SSH command:
	I0725 18:49:34.789672   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | exit 0
	I0725 18:49:34.916303   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | SSH cmd err, output: <nil>: 
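The WaitForSSH step above shells out to the system ssh client with host-key checking disabled and runs a trivial command to confirm the guest is reachable. An approximate equivalent in Go, reusing the flags and paths from the log (error handling reduced to a print):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.50.221",
		"exit 0", // any trivial command proves sshd is up and the key works
	}
	if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
		fmt.Printf("SSH not ready yet: %v (%s)\n", err, out)
	} else {
		fmt.Println("SSH is available")
	}
}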
	I0725 18:49:34.916741   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetConfigRaw
	I0725 18:49:34.917309   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:34.919931   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920356   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.920388   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920711   59645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/config.json ...
	I0725 18:49:34.920952   59645 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:34.920973   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:34.921158   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:34.923280   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923663   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.923699   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923782   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:34.923953   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924116   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924367   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:34.924559   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:34.924778   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:34.924789   59645 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:35.036568   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:35.036605   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.036862   59645 buildroot.go:166] provisioning hostname "default-k8s-diff-port-600433"
	I0725 18:49:35.036890   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.037089   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.039523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.039891   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.039928   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.040048   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.040240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040409   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040540   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.040696   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.040855   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.040867   59645 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-600433 && echo "default-k8s-diff-port-600433" | sudo tee /etc/hostname
	I0725 18:49:35.170553   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-600433
	
	I0725 18:49:35.170606   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.173260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173590   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.173615   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173811   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.174057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.174606   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.174762   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.174798   59645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-600433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-600433/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-600433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:35.292349   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
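Hostname provisioning is two shelled-out commands: write /etc/hostname, then make sure /etc/hosts maps 127.0.1.1 to the new name. A small helper that renders the same shell for a given name (the helper itself is illustrative):

package main

import "fmt"

// hostnameScript returns the shell run over SSH above: set the hostname,
// persist it to /etc/hostname, and point 127.0.1.1 at it in /etc/hosts.
func hostnameScript(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameScript("default-k8s-diff-port-600433"))
}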
	I0725 18:49:35.292387   59645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:35.292425   59645 buildroot.go:174] setting up certificates
	I0725 18:49:35.292443   59645 provision.go:84] configureAuth start
	I0725 18:49:35.292456   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.292749   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:35.295317   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295628   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.295657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295817   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.297815   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298114   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.298146   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298330   59645 provision.go:143] copyHostCerts
	I0725 18:49:35.298373   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:35.298384   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:35.298461   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:35.298578   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:35.298590   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:35.298631   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:35.298725   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:35.298735   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:35.298767   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:35.298846   59645 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-600433 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-600433 localhost minikube]
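The "generating server cert" step issues a server certificate signed by the minikube CA with the SANs listed above (loopback, the machine IP, and a few hostnames). A compact sketch with Go's crypto/x509, substituting a throwaway CA for the real ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key and self-signed CA certificate (stand-ins for ca-key.pem / ca.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SAN list from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-600433"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.221")},
		DNSNames:     []string{"default-k8s-diff-port-600433", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}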
	I0725 18:49:35.385077   59645 provision.go:177] copyRemoteCerts
	I0725 18:49:35.385142   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:35.385168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.387858   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388165   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.388195   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.388604   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.388760   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.388903   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.473920   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:35.496193   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0725 18:49:35.517673   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:35.538593   59645 provision.go:87] duration metric: took 246.139455ms to configureAuth
	I0725 18:49:35.538617   59645 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:35.538796   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:35.538860   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.541598   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542144   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.542168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542369   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.542548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542664   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542812   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.542937   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.543138   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.543167   59645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:35.799471   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:35.799495   59645 machine.go:97] duration metric: took 878.530074ms to provisionDockerMachine
	I0725 18:49:35.799509   59645 start.go:293] postStartSetup for "default-k8s-diff-port-600433" (driver="kvm2")
	I0725 18:49:35.799526   59645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:35.799569   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:35.799861   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:35.799916   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.802372   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.802776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802882   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.803057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.803200   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.803304   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.886188   59645 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:35.890053   59645 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:35.890090   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:35.890157   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:35.890227   59645 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:35.890317   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:35.899121   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:35.921904   59645 start.go:296] duration metric: took 122.381588ms for postStartSetup
	I0725 18:49:35.921942   59645 fix.go:56] duration metric: took 20.148249245s for fixHost
	I0725 18:49:35.921960   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.924865   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925265   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.925300   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925414   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.925608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925876   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.926011   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.926191   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.926205   59645 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:49:36.036748   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933376.013042854
	
	I0725 18:49:36.036779   59645 fix.go:216] guest clock: 1721933376.013042854
	I0725 18:49:36.036790   59645 fix.go:229] Guest: 2024-07-25 18:49:36.013042854 +0000 UTC Remote: 2024-07-25 18:49:35.921945116 +0000 UTC m=+286.890099623 (delta=91.097738ms)
	I0725 18:49:36.036855   59645 fix.go:200] guest clock delta is within tolerance: 91.097738ms
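The guest-clock check compares the VM's clock against the host's and only resyncs when the delta exceeds a tolerance; here the ~91ms delta passes. The comparison reduces to something like this (the 2s tolerance is illustrative):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock that no resync is needed.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1721933376013042854) // 1721933376.013042854, from the log
	host := time.Date(2024, 7, 25, 18, 49, 35, 921945116, time.UTC)
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true: delta is roughly 91ms
}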
	I0725 18:49:36.036863   59645 start.go:83] releasing machines lock for "default-k8s-diff-port-600433", held for 20.263198657s
	I0725 18:49:36.036905   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.037178   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:36.040216   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040692   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.040717   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040881   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041501   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041596   59645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:36.041657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.041693   59645 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:36.041718   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.044433   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.044775   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044799   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045030   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045191   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.045209   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045217   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045375   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045476   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045501   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.045648   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045828   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045988   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.158410   59645 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:36.164254   59645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:36.305911   59645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:36.312544   59645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:36.312642   59645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:36.327394   59645 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:36.327420   59645 start.go:495] detecting cgroup driver to use...
	I0725 18:49:36.327497   59645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:36.342695   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:36.355528   59645 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:36.355593   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:36.369191   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:36.382786   59645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:36.498465   59645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:36.635188   59645 docker.go:233] disabling docker service ...
	I0725 18:49:36.635272   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:36.655356   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:36.671402   59645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:36.819969   59645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:36.961130   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:36.976459   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:36.995542   59645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:49:36.995607   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.006967   59645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:37.007041   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.017503   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.027807   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.037804   59645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:37.047817   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.057895   59645 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.075586   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.085987   59645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:37.095527   59645 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:37.095593   59645 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:37.107540   59645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:37.117409   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:37.246455   59645 ssh_runner.go:195] Run: sudo systemctl restart crio
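The run of sed commands above pins the pause image and switches the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before systemd is reloaded and crio restarted. The same two edits expressed directly in Go, operating on the config text (a sketch, not the code minikube runs):

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides mirrors the sed edits above: pin the pause image and
// force the cgroupfs cgroup manager in a crio drop-in config.
func applyCrioOverrides(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.5\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCrioOverrides(in))
}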
	I0725 18:49:37.383873   59645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:37.383959   59645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:37.388630   59645 start.go:563] Will wait 60s for crictl version
	I0725 18:49:37.388687   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:49:37.393190   59645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:37.439603   59645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:37.439688   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.468723   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.501339   59645 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:49:37.502895   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:37.505728   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506098   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:37.506128   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506341   59645 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:37.510432   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:37.523446   59645 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:37.523608   59645 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:49:37.523691   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:37.561149   59645 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:49:37.561209   59645 ssh_runner.go:195] Run: which lz4
	I0725 18:49:37.565614   59645 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:49:37.569702   59645 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:37.569738   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:49:38.884355   59645 crio.go:462] duration metric: took 1.318757754s to copy over tarball
	I0725 18:49:38.884481   59645 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:49:37.310225   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting to get IP...
	I0725 18:49:37.311059   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.311480   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.311557   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.311444   61209 retry.go:31] will retry after 249.654633ms: waiting for machine to come up
	I0725 18:49:37.563210   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.563727   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.563774   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.563696   61209 retry.go:31] will retry after 360.974896ms: waiting for machine to come up
	I0725 18:49:37.926464   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.927033   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.927104   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.926935   61209 retry.go:31] will retry after 392.213819ms: waiting for machine to come up
	I0725 18:49:38.320659   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.321153   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.321182   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.321107   61209 retry.go:31] will retry after 443.035852ms: waiting for machine to come up
	I0725 18:49:38.765569   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.765972   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.765996   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.765944   61209 retry.go:31] will retry after 691.876502ms: waiting for machine to come up
	I0725 18:49:39.459944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:39.460308   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:39.460354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:39.460259   61209 retry.go:31] will retry after 870.093433ms: waiting for machine to come up
	I0725 18:49:40.331944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:40.332382   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:40.332411   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:40.332301   61209 retry.go:31] will retry after 875.3931ms: waiting for machine to come up
	I0725 18:49:41.209789   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:41.210251   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:41.210275   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:41.210196   61209 retry.go:31] will retry after 1.355093494s: waiting for machine to come up
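Note: the retry.go:31 lines above show libmachine polling the not-yet-booted KVM domain for an IP address, sleeping a growing, jittered interval between attempts. Below is a minimal stand-alone sketch of that pattern; the function name, growth factor and durations are illustrative assumptions, not minikube's actual implementation.

// Minimal sketch (not minikube's code) of a jittered wait-for-IP retry loop.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the timeout passes,
// sleeping a growing, jittered interval between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	start := time.Now()
	backoff := 300 * time.Millisecond
	for time.Since(start) < timeout {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff += backoff / 2 // grow roughly 1.5x per attempt, as in the log intervals
	}
	return "", errors.New("machine did not come up before the timeout")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++ // simulated lookup that succeeds on the fourth try
		if attempts < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.0.2.10", nil // placeholder address for the simulation
	}, 2*time.Minute)
	fmt.Println(ip, err)
}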
	I0725 18:49:41.126101   59645 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241583376s)
	I0725 18:49:41.126141   59645 crio.go:469] duration metric: took 2.24174402s to extract the tarball
	I0725 18:49:41.126152   59645 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:49:41.163655   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:41.204248   59645 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:49:41.204270   59645 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:49:41.204278   59645 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0725 18:49:41.204442   59645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-600433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:49:41.204506   59645 ssh_runner.go:195] Run: crio config
	I0725 18:49:41.248210   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:41.248239   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:41.248255   59645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:49:41.248286   59645 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-600433 NodeName:default-k8s-diff-port-600433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:49:41.248491   59645 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-600433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:49:41.248591   59645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
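Note: kubeadm.go:187 above prints the config it just rendered for this node: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration, all parameterised by the node IP, port, name and CIDRs. A trimmed-down sketch of rendering such a document from node parameters with text/template follows; the template here is an illustrative assumption, not minikube's actual template.

// Sketch: render a minimal kubeadm config from node parameters.
package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
networking:
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	params := struct {
		NodeIP, NodeName, PodCIDR, ServiceCIDR string
		Port                                   int
	}{"192.168.50.221", "default-k8s-diff-port-600433", "10.244.0.0/16", "10.96.0.0/12", 8444}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}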
	I0725 18:49:41.257987   59645 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:49:41.258057   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:49:41.267141   59645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0725 18:49:41.283078   59645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:49:41.299009   59645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0725 18:49:41.315642   59645 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0725 18:49:41.319267   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
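Note: the bash one-liner above updates /etc/hosts idempotently: it filters out any existing line ending in the control-plane host name, appends a fresh IP/name pair, and copies the result back into place. The same logic expressed in Go is sketched below; writing to a sibling ".new" file instead of /etc/hosts itself is an assumption for safe local testing.

// Sketch of the grep -v / echo / cp sequence used to upsert a hosts entry.
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing line for host and appends "ip<TAB>host".
func upsertHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry, like grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	// Write next to the original rather than overwriting /etc/hosts directly.
	return os.WriteFile(hostsPath+".new", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(upsertHost("/etc/hosts", "192.168.50.221", "control-plane.minikube.internal"))
}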
	I0725 18:49:41.330435   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:41.453042   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:49:41.471864   59645 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433 for IP: 192.168.50.221
	I0725 18:49:41.471896   59645 certs.go:194] generating shared ca certs ...
	I0725 18:49:41.471915   59645 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:41.472098   59645 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:49:41.472151   59645 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:49:41.472163   59645 certs.go:256] generating profile certs ...
	I0725 18:49:41.472271   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.key
	I0725 18:49:41.472399   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key.28cfcfe9
	I0725 18:49:41.472470   59645 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key
	I0725 18:49:41.472630   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:49:41.472681   59645 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:49:41.472696   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:49:41.472734   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:49:41.472768   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:49:41.472801   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:49:41.472875   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:41.473783   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:49:41.519536   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:49:41.570915   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:49:41.596050   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:49:41.622290   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 18:49:41.644771   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:49:41.673056   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:49:41.698215   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:49:41.720502   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:49:41.742897   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:49:41.765403   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:49:41.788097   59645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:49:41.804016   59645 ssh_runner.go:195] Run: openssl version
	I0725 18:49:41.809451   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:49:41.819312   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823677   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823731   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.829342   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:49:41.839245   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:49:41.848902   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852894   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852948   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.858231   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:49:41.868414   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:49:41.878478   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882534   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882596   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.888100   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:49:41.897994   59645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:49:41.902066   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:49:41.907593   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:49:41.913339   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:49:41.918977   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:49:41.924846   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:49:41.931208   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
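Note: each of the "openssl x509 ... -checkend 86400" invocations above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a failure would trigger regeneration. A rough Go equivalent using only the standard library is sketched below; the path and helper name are illustrative assumptions.

// Sketch of an openssl -checkend style validity check.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the PEM certificate at path is still valid d from now.
func certValidFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}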
	I0725 18:49:41.936979   59645 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:49:41.937105   59645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:49:41.937165   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:41.973862   59645 cri.go:89] found id: ""
	I0725 18:49:41.973954   59645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:49:41.986980   59645 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:49:41.987006   59645 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:49:41.987059   59645 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:49:41.996155   59645 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:49:41.997176   59645 kubeconfig.go:125] found "default-k8s-diff-port-600433" server: "https://192.168.50.221:8444"
	I0725 18:49:41.999255   59645 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:49:42.007863   59645 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0725 18:49:42.007898   59645 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:49:42.007910   59645 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:49:42.007950   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:42.041234   59645 cri.go:89] found id: ""
	I0725 18:49:42.041344   59645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:49:42.057752   59645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:49:42.067347   59645 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:49:42.067367   59645 kubeadm.go:157] found existing configuration files:
	
	I0725 18:49:42.067414   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0725 18:49:42.075815   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:49:42.075862   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:49:42.084352   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0725 18:49:42.092738   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:49:42.092795   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:49:42.101917   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.110104   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:49:42.110171   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.118781   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0725 18:49:42.127369   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:49:42.127417   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:49:42.136433   59645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:49:42.145402   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.256466   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.967465   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.180271   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.238156   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.333954   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:49:43.334063   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:43.834381   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:42.566588   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:42.567061   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:42.567089   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:42.567010   61209 retry.go:31] will retry after 1.670701174s: waiting for machine to come up
	I0725 18:49:44.238961   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:44.239359   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:44.239377   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:44.239329   61209 retry.go:31] will retry after 2.028917586s: waiting for machine to come up
	I0725 18:49:46.270213   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:46.270674   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:46.270695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:46.270630   61209 retry.go:31] will retry after 2.760614678s: waiting for machine to come up
	I0725 18:49:44.335103   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:44.835115   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.334875   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.834915   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.849684   59645 api_server.go:72] duration metric: took 2.515729384s to wait for apiserver process to appear ...
	I0725 18:49:45.849717   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:49:45.849752   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.417830   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:49:48.417861   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:49:48.417898   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.496770   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.496823   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:48.850275   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.854417   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.854446   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.350652   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.356554   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:49.356585   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.849872   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.855690   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:49:49.863742   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:49:49.863770   59645 api_server.go:131] duration metric: took 4.014045168s to wait for apiserver health ...
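Note: the healthz probing above keeps polling https://192.168.50.221:8444/healthz, treating the intermediate 403 and 500 responses as "not ready yet" until a plain 200/ok comes back. A simplified sketch of such a poll loop follows; TLS verification is skipped here because the probe targets the VM IP directly, and the timings and names are assumptions rather than minikube's api_server.go.

// Sketch: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.221:8444/healthz", 4*time.Minute))
}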
	I0725 18:49:49.863780   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:49.863788   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:49.865438   59645 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:49:49.034670   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:49.035109   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:49.035136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:49.035073   61209 retry.go:31] will retry after 2.928049351s: waiting for machine to come up
	I0725 18:49:49.866747   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:49:49.877963   59645 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:49:49.898915   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:49:49.916996   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:49:49.917037   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:49:49.917049   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:49:49.917067   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:49:49.917080   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:49:49.917093   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:49:49.917105   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:49:49.917112   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:49:49.917120   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:49:49.917127   59645 system_pods.go:74] duration metric: took 18.191827ms to wait for pod list to return data ...
	I0725 18:49:49.917145   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:49:49.921009   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:49:49.921032   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:49:49.921046   59645 node_conditions.go:105] duration metric: took 3.893327ms to run NodePressure ...
	I0725 18:49:49.921064   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:50.188485   59645 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192676   59645 kubeadm.go:739] kubelet initialised
	I0725 18:49:50.192696   59645 kubeadm.go:740] duration metric: took 4.188813ms waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192710   59645 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:50.197736   59645 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.203856   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203881   59645 pod_ready.go:81] duration metric: took 6.126055ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.203891   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203897   59645 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.209211   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209233   59645 pod_ready.go:81] duration metric: took 5.32855ms for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.209242   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209248   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.216079   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216104   59645 pod_ready.go:81] duration metric: took 6.848427ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.216115   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216122   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.301694   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301718   59645 pod_ready.go:81] duration metric: took 85.5884ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.301728   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301735   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.702363   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702392   59645 pod_ready.go:81] duration metric: took 400.649914ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.702400   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702406   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.102906   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102943   59645 pod_ready.go:81] duration metric: took 400.527709ms for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.102955   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102964   59645 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.502187   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502217   59645 pod_ready.go:81] duration metric: took 399.245254ms for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.502228   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502235   59645 pod_ready.go:38] duration metric: took 1.309515361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
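Note: pod_ready.go above repeatedly re-checks each system-critical pod, skipping the wait while the node itself still reports Ready=False. A rough sketch of the underlying per-pod check with client-go is shown below, assuming a kubeconfig at the default location; the helper name, pod name and intervals are illustrative only.

// Sketch: wait for a pod's Ready condition using client-go (assumed dependency).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // re-check, as pod_ready.go does
	}
	return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-mfjzs", 4*time.Minute))
}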
	I0725 18:49:51.502249   59645 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:49:51.513796   59645 ops.go:34] apiserver oom_adj: -16
	I0725 18:49:51.513816   59645 kubeadm.go:597] duration metric: took 9.526804087s to restartPrimaryControlPlane
	I0725 18:49:51.513823   59645 kubeadm.go:394] duration metric: took 9.576855212s to StartCluster
	I0725 18:49:51.513842   59645 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.513969   59645 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:49:51.515531   59645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.515761   59645 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:49:51.515843   59645 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:49:51.515951   59645 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515975   59645 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515983   59645 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.515995   59645 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:49:51.516017   59645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-600433"
	I0725 18:49:51.516024   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516025   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:51.516022   59645 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.516103   59645 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.516123   59645 addons.go:243] addon metrics-server should already be in state true
	I0725 18:49:51.516202   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516314   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516361   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516365   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516386   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516636   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516713   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.517682   59645 out.go:177] * Verifying Kubernetes components...
	I0725 18:49:51.519072   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:51.530909   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35239
	I0725 18:49:51.531207   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0725 18:49:51.531391   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531704   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531952   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.531978   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532148   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.532169   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532291   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.532474   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.532501   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.533028   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.533069   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.534984   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0725 18:49:51.535323   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.535729   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.535749   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.536027   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.536055   59645 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.536077   59645 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:49:51.536103   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.536463   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536491   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.536518   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536562   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.548458   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I0725 18:49:51.548987   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.549539   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.549563   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.549880   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.550016   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0725 18:49:51.550105   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.550366   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.550862   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.550897   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0725 18:49:51.550975   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551220   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.551462   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.551708   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.551727   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.551768   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.552170   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.552745   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.552787   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.553221   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.554936   59645 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:49:51.556152   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:49:51.556166   59645 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:49:51.556184   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.556202   59645 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:49:51.557826   59645 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.557870   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:49:51.557892   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.558763   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559109   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.559126   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559255   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.559402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.559522   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.559637   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.560776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561142   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.561169   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561285   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.561462   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.561624   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.561769   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.572412   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I0725 18:49:51.572773   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.573256   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.573269   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.573596   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.573793   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.575260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.575503   59645 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.575513   59645 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:49:51.575523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.577887   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578208   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.578228   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578339   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.578496   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.578651   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.578775   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.710511   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:49:51.728187   59645 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:51.810767   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:49:51.810801   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:49:51.822774   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.828890   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.841308   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:49:51.841332   59645 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:49:51.864965   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:49:51.864991   59645 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:49:51.910359   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:49:52.699480   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699512   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699488   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699573   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699812   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699829   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699839   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699893   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.699926   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699940   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699956   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699968   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.700056   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700086   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700202   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700218   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700248   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.704859   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.704873   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.705126   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.705144   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.794977   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795000   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795318   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795339   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795341   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.795346   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795360   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795632   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795657   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795668   59645 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-600433"
	I0725 18:49:52.797643   59645 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:49:52.798886   59645 addons.go:510] duration metric: took 1.283046902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0725 18:49:53.731631   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.964707   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:51.965228   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:51.965263   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:51.965151   61209 retry.go:31] will retry after 3.053047755s: waiting for machine to come up
	I0725 18:49:55.022350   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022815   60176 main.go:141] libmachine: (old-k8s-version-108542) Found IP for machine: 192.168.39.29
	I0725 18:49:55.022846   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has current primary IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022858   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserving static IP address...
	I0725 18:49:55.023277   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserved static IP address: 192.168.39.29
	I0725 18:49:55.023333   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.023342   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting for SSH to be available...
	I0725 18:49:55.023394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | skip adding static IP to network mk-old-k8s-version-108542 - found existing host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"}
	I0725 18:49:55.023425   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Getting to WaitForSSH function...
	I0725 18:49:55.025250   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025544   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.025574   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025668   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH client type: external
	I0725 18:49:55.025699   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa (-rw-------)
	I0725 18:49:55.025731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:55.025753   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | About to run SSH command:
	I0725 18:49:55.025770   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | exit 0
	I0725 18:49:55.152309   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | SSH cmd err, output: <nil>: 
	I0725 18:49:55.152720   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:49:55.153338   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.155460   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.155755   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155969   60176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json ...
	I0725 18:49:55.156128   60176 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:55.156143   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:55.156307   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.158465   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.158795   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.158827   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.159012   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.159174   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159366   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159512   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.159688   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.159902   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.159914   60176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:55.268422   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:55.268446   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268707   60176 buildroot.go:166] provisioning hostname "old-k8s-version-108542"
	I0725 18:49:55.268732   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268931   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.271599   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.271913   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.271949   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.272120   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.272285   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272490   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272657   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.272830   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.273003   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.273017   60176 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-108542 && echo "old-k8s-version-108542" | sudo tee /etc/hostname
	I0725 18:49:55.398261   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-108542
	
	I0725 18:49:55.398291   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.401090   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.401517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401669   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.401870   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402026   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402182   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.402380   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.402621   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.402648   60176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-108542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-108542/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-108542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:55.523079   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:55.523115   60176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:55.523147   60176 buildroot.go:174] setting up certificates
	I0725 18:49:55.523156   60176 provision.go:84] configureAuth start
	I0725 18:49:55.523165   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.523486   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.526235   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526644   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.526675   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526875   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.529466   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.529836   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.529865   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.530004   60176 provision.go:143] copyHostCerts
	I0725 18:49:55.530058   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:55.530068   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:55.530113   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:55.530198   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:55.530205   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:55.530225   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:55.530386   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:55.530401   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:55.530426   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:55.530494   60176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-108542 san=[127.0.0.1 192.168.39.29 localhost minikube old-k8s-version-108542]
	I0725 18:49:55.740503   60176 provision.go:177] copyRemoteCerts
	I0725 18:49:55.740561   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:55.740585   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.743257   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743582   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.743615   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743798   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.743997   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.744160   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.744312   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:55.825771   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:55.847516   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0725 18:49:55.869368   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:55.893223   60176 provision.go:87] duration metric: took 370.054854ms to configureAuth
	I0725 18:49:55.893255   60176 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:55.893425   60176 config.go:182] Loaded profile config "old-k8s-version-108542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:49:55.893500   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.896394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896703   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.896758   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896962   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.897161   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897431   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897631   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.897855   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.898023   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.898036   60176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:56.181257   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:56.181300   60176 machine.go:97] duration metric: took 1.025159397s to provisionDockerMachine
	I0725 18:49:56.181315   60176 start.go:293] postStartSetup for "old-k8s-version-108542" (driver="kvm2")
	I0725 18:49:56.181334   60176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:56.181353   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.181666   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:56.181688   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.184354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.184718   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184851   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.185034   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.185185   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.185308   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.266683   60176 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:56.270387   60176 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:56.270407   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:56.270474   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:56.270559   60176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:56.270668   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:56.279276   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:56.302444   60176 start.go:296] duration metric: took 121.115308ms for postStartSetup
	I0725 18:49:56.302497   60176 fix.go:56] duration metric: took 20.26546429s for fixHost
	I0725 18:49:56.302517   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.305136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.305517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305706   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.305922   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306074   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306193   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.306317   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:56.306502   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:56.306514   60176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:49:56.412717   60732 start.go:364] duration metric: took 2m4.976127328s to acquireMachinesLock for "embed-certs-646344"
	I0725 18:49:56.412771   60732 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:56.412782   60732 fix.go:54] fixHost starting: 
	I0725 18:49:56.413158   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:56.413188   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:56.432299   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0725 18:49:56.432712   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:56.433231   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:49:56.433260   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:56.433647   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:56.433868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:49:56.434040   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:49:56.435582   60732 fix.go:112] recreateIfNeeded on embed-certs-646344: state=Stopped err=<nil>
	I0725 18:49:56.435617   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	W0725 18:49:56.435793   60732 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:56.437567   60732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-646344" ...
	I0725 18:49:56.412575   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933396.389223979
	
	I0725 18:49:56.412602   60176 fix.go:216] guest clock: 1721933396.389223979
	I0725 18:49:56.412612   60176 fix.go:229] Guest: 2024-07-25 18:49:56.389223979 +0000 UTC Remote: 2024-07-25 18:49:56.302501019 +0000 UTC m=+249.953644815 (delta=86.72296ms)
	I0725 18:49:56.412634   60176 fix.go:200] guest clock delta is within tolerance: 86.72296ms
	I0725 18:49:56.412639   60176 start.go:83] releasing machines lock for "old-k8s-version-108542", held for 20.375658703s
	I0725 18:49:56.412668   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.412935   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:56.415814   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416191   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.416219   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416398   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.416862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417065   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417160   60176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:56.417201   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.417309   60176 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:56.417329   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.420122   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420371   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420526   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420550   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420682   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.420816   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420846   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.420850   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420984   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.421058   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421126   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.421198   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.421272   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421418   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.529391   60176 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:56.535114   60176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:56.674979   60176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:56.681160   60176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:56.681260   60176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:56.696192   60176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:56.696215   60176 start.go:495] detecting cgroup driver to use...
	I0725 18:49:56.696309   60176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:56.713088   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:56.727033   60176 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:56.727095   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:56.742008   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:56.756146   60176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:56.884075   60176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:57.051613   60176 docker.go:233] disabling docker service ...
	I0725 18:49:57.051742   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:57.068011   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:57.082300   60176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:57.208673   60176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:57.372393   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:57.397281   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:57.418913   60176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0725 18:49:57.418978   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.429833   60176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:57.429909   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.440717   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.451076   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.465052   60176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:57.476592   60176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:57.487164   60176 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:57.487225   60176 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:57.501748   60176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:57.514743   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:57.658648   60176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:49:57.811455   60176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:57.811534   60176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:57.816193   60176 start.go:563] Will wait 60s for crictl version
	I0725 18:49:57.816267   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:49:57.819557   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:57.854511   60176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:57.854594   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.881542   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.910664   60176 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0725 18:49:55.733934   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:58.232025   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:56.438776   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Start
	I0725 18:49:56.438950   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring networks are active...
	I0725 18:49:56.439813   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network default is active
	I0725 18:49:56.440144   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network mk-embed-certs-646344 is active
	I0725 18:49:56.440644   60732 main.go:141] libmachine: (embed-certs-646344) Getting domain xml...
	I0725 18:49:56.441344   60732 main.go:141] libmachine: (embed-certs-646344) Creating domain...
	I0725 18:49:57.747307   60732 main.go:141] libmachine: (embed-certs-646344) Waiting to get IP...
	I0725 18:49:57.748364   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.748801   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.748852   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.748752   61389 retry.go:31] will retry after 207.883752ms: waiting for machine to come up
	I0725 18:49:57.958328   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.958813   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.958837   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.958773   61389 retry.go:31] will retry after 256.983672ms: waiting for machine to come up
	I0725 18:49:58.217316   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.217798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.217858   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.217760   61389 retry.go:31] will retry after 427.650618ms: waiting for machine to come up
	I0725 18:49:58.647668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.648053   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.648088   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.648021   61389 retry.go:31] will retry after 585.454725ms: waiting for machine to come up
	I0725 18:49:59.235003   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.235582   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.235612   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.235535   61389 retry.go:31] will retry after 477.660763ms: waiting for machine to come up
	I0725 18:49:59.715182   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.715675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.715706   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.715628   61389 retry.go:31] will retry after 775.403931ms: waiting for machine to come up
	I0725 18:50:00.492798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:00.493211   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:00.493239   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:00.493160   61389 retry.go:31] will retry after 1.086502086s: waiting for machine to come up
	I0725 18:49:57.912004   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:57.914958   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915429   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:57.915462   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915628   60176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:57.919685   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:57.932248   60176 kubeadm.go:883] updating cluster {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:57.932392   60176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:49:57.932440   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:57.982230   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:49:57.982305   60176 ssh_runner.go:195] Run: which lz4
	I0725 18:49:57.986657   60176 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 18:49:57.990932   60176 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:57.990956   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0725 18:49:59.415735   60176 crio.go:462] duration metric: took 1.429111358s to copy over tarball
	I0725 18:49:59.415800   60176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:49:59.234882   59645 node_ready.go:49] node "default-k8s-diff-port-600433" has status "Ready":"True"
	I0725 18:49:59.234909   59645 node_ready.go:38] duration metric: took 7.506682834s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:59.234921   59645 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:59.243034   59645 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.249940   59645 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace has status "Ready":"True"
	I0725 18:49:59.250024   59645 pod_ready.go:81] duration metric: took 6.957177ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.250051   59645 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.258057   59645 pod_ready.go:102] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:01.757802   59645 pod_ready.go:92] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.757828   59645 pod_ready.go:81] duration metric: took 2.50775832s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.757840   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762837   59645 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.762862   59645 pod_ready.go:81] duration metric: took 5.014715ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762874   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768001   59645 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.768027   59645 pod_ready.go:81] duration metric: took 5.144999ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768039   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772551   59645 pod_ready.go:92] pod "kube-proxy-smhmv" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.772574   59645 pod_ready.go:81] duration metric: took 4.526528ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772585   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.580990   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:01.581438   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:01.581464   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:01.581397   61389 retry.go:31] will retry after 1.452798696s: waiting for machine to come up
	I0725 18:50:03.036272   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:03.036730   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:03.036766   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:03.036682   61389 retry.go:31] will retry after 1.667137658s: waiting for machine to come up
	I0725 18:50:04.705567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:04.705992   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:04.706019   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:04.705958   61389 retry.go:31] will retry after 2.010863389s: waiting for machine to come up
	I0725 18:50:02.370917   60176 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.955090558s)
	I0725 18:50:02.370951   60176 crio.go:469] duration metric: took 2.955186203s to extract the tarball
	I0725 18:50:02.370960   60176 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:50:02.411686   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:02.448550   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:50:02.448575   60176 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:02.448653   60176 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0725 18:50:02.448657   60176 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.448722   60176 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.448739   60176 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.448661   60176 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450195   60176 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.450213   60176 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0725 18:50:02.450237   60176 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.450335   60176 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.450375   60176 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.450489   60176 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.711747   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.718711   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0725 18:50:02.721465   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.721473   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.728447   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.745432   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.745791   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.776147   60176 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0725 18:50:02.776200   60176 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.776245   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.857374   60176 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0725 18:50:02.857423   60176 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0725 18:50:02.857486   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.876850   60176 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0725 18:50:02.876897   60176 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.876922   60176 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0725 18:50:02.876963   60176 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.876974   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877024   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877044   60176 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0725 18:50:02.877071   60176 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.877107   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.896960   60176 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0725 18:50:02.897008   60176 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.897011   60176 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0725 18:50:02.897042   60176 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.897053   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897061   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.897083   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897120   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0725 18:50:02.897148   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.897196   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.897248   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.992459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.992499   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0725 18:50:03.005360   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0725 18:50:03.005381   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0725 18:50:03.005435   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0725 18:50:03.005459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:03.005503   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0725 18:50:03.042218   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0725 18:50:03.054960   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0725 18:50:03.279419   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:03.416646   60176 cache_images.go:92] duration metric: took 968.05409ms to LoadCachedImages
	W0725 18:50:03.416750   60176 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0725 18:50:03.416767   60176 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.20.0 crio true true} ...
	I0725 18:50:03.416896   60176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-108542 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:03.416979   60176 ssh_runner.go:195] Run: crio config
	I0725 18:50:03.470581   60176 cni.go:84] Creating CNI manager for ""
	I0725 18:50:03.470611   60176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:03.470627   60176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:03.470647   60176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-108542 NodeName:old-k8s-version-108542 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0725 18:50:03.470772   60176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-108542"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:03.470828   60176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0725 18:50:03.481757   60176 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:03.481839   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:03.494342   60176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0725 18:50:03.511779   60176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:03.532137   60176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0725 18:50:03.551049   60176 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:03.554903   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:03.566677   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:03.687540   60176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:03.710900   60176 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542 for IP: 192.168.39.29
	I0725 18:50:03.710922   60176 certs.go:194] generating shared ca certs ...
	I0725 18:50:03.710937   60176 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:03.711088   60176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:03.711126   60176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:03.711132   60176 certs.go:256] generating profile certs ...
	I0725 18:50:03.711231   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key
	I0725 18:50:03.711282   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0
	I0725 18:50:03.711315   60176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key
	I0725 18:50:03.711420   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:03.711449   60176 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:03.711458   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:03.711479   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:03.711499   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:03.711520   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:03.711562   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:03.712203   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:03.762265   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:03.804226   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:03.840167   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:03.868353   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 18:50:03.893425   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:03.917266   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:03.946205   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:03.974128   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:04.001887   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:04.026495   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:04.049083   60176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:04.065407   60176 ssh_runner.go:195] Run: openssl version
	I0725 18:50:04.071064   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:04.082038   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086705   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086760   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.092445   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:04.103129   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:04.113789   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118390   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118467   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.123884   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:04.134230   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:04.144372   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148559   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148620   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.153744   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:04.163757   60176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:04.167873   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:04.173706   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:04.179385   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:04.185222   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:04.190716   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:04.196938   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 18:50:04.202361   60176 kubeadm.go:392] StartCluster: {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:04.202447   60176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:04.202505   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.243628   60176 cri.go:89] found id: ""
	I0725 18:50:04.243703   60176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:04.253768   60176 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:04.253788   60176 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:04.253841   60176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:04.264596   60176 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:04.265990   60176 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-108542" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:04.266997   60176 kubeconfig.go:62] /home/jenkins/minikube-integration/19326-5877/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-108542" cluster setting kubeconfig missing "old-k8s-version-108542" context setting]
	I0725 18:50:04.268480   60176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:04.388386   60176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:04.398469   60176 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.29
	I0725 18:50:04.398517   60176 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:04.398530   60176 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:04.398590   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.434823   60176 cri.go:89] found id: ""
	I0725 18:50:04.434906   60176 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:04.453378   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:04.463520   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:04.463559   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:04.463611   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:04.473075   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:04.473138   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:04.482881   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:04.494801   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:04.494875   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:04.507011   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.516433   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:04.516505   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.528076   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:04.537505   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:04.537572   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:04.547429   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:04.556717   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:04.754947   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.606839   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.850150   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.957944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:06.039317   60176 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:06.039436   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:04.245768   59645 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:05.780345   59645 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:05.780380   59645 pod_ready.go:81] duration metric: took 4.007784646s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:05.780395   59645 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:07.787259   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:06.718406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:06.718961   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:06.718995   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:06.718902   61389 retry.go:31] will retry after 2.686345537s: waiting for machine to come up
	I0725 18:50:09.406854   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:09.407346   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:09.407388   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:09.407313   61389 retry.go:31] will retry after 3.432781605s: waiting for machine to come up
	I0725 18:50:06.539802   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.539809   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.539594   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.040315   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.539830   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.039578   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.539828   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:11.039598   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.285959   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:12.287101   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:14.181127   59378 start.go:364] duration metric: took 53.405056746s to acquireMachinesLock for "no-preload-371663"
	I0725 18:50:14.181178   59378 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:50:14.181187   59378 fix.go:54] fixHost starting: 
	I0725 18:50:14.181648   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:14.181689   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:14.198182   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I0725 18:50:14.198640   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:14.199151   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:14.199176   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:14.199619   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:14.199815   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:14.199945   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:14.201475   59378 fix.go:112] recreateIfNeeded on no-preload-371663: state=Stopped err=<nil>
	I0725 18:50:14.201496   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	W0725 18:50:14.201653   59378 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:50:14.203496   59378 out.go:177] * Restarting existing kvm2 VM for "no-preload-371663" ...
	I0725 18:50:12.841703   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842187   60732 main.go:141] libmachine: (embed-certs-646344) Found IP for machine: 192.168.61.133
	I0725 18:50:12.842222   60732 main.go:141] libmachine: (embed-certs-646344) Reserving static IP address...
	I0725 18:50:12.842234   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has current primary IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842625   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.842650   60732 main.go:141] libmachine: (embed-certs-646344) DBG | skip adding static IP to network mk-embed-certs-646344 - found existing host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"}
	I0725 18:50:12.842660   60732 main.go:141] libmachine: (embed-certs-646344) Reserved static IP address: 192.168.61.133
	I0725 18:50:12.842671   60732 main.go:141] libmachine: (embed-certs-646344) Waiting for SSH to be available...
	I0725 18:50:12.842684   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Getting to WaitForSSH function...
	I0725 18:50:12.844916   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845214   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.845237   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845372   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH client type: external
	I0725 18:50:12.845406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa (-rw-------)
	I0725 18:50:12.845474   60732 main.go:141] libmachine: (embed-certs-646344) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:12.845498   60732 main.go:141] libmachine: (embed-certs-646344) DBG | About to run SSH command:
	I0725 18:50:12.845528   60732 main.go:141] libmachine: (embed-certs-646344) DBG | exit 0
	I0725 18:50:12.968383   60732 main.go:141] libmachine: (embed-certs-646344) DBG | SSH cmd err, output: <nil>: 
	I0725 18:50:12.968690   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetConfigRaw
	I0725 18:50:12.969249   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:12.971567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972072   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.972102   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972338   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:50:12.972526   60732 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:12.972544   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:12.972739   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:12.974938   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975308   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.975336   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975462   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:12.975671   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.975831   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.976010   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:12.976184   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:12.976414   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:12.976428   60732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:13.076310   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:13.076369   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076609   60732 buildroot.go:166] provisioning hostname "embed-certs-646344"
	I0725 18:50:13.076637   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076830   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.079542   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.079895   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.079923   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.080050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.080232   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080385   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080530   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.080722   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.080917   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.080935   60732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-646344 && echo "embed-certs-646344" | sudo tee /etc/hostname
	I0725 18:50:13.193782   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-646344
	
	I0725 18:50:13.193814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.196822   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197149   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.197192   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197367   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.197581   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197772   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197906   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.198079   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.198292   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.198315   60732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-646344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-646344/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-646344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:13.313070   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:50:13.313098   60732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:13.313146   60732 buildroot.go:174] setting up certificates
	I0725 18:50:13.313161   60732 provision.go:84] configureAuth start
	I0725 18:50:13.313176   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.313457   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:13.316245   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316666   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.316695   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.319178   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319516   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.319540   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319697   60732 provision.go:143] copyHostCerts
	I0725 18:50:13.319751   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:13.319763   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:13.319816   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:13.319900   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:13.319908   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:13.319929   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:13.319981   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:13.319988   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:13.320004   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:13.320051   60732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.embed-certs-646344 san=[127.0.0.1 192.168.61.133 embed-certs-646344 localhost minikube]
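The server certificate generated above carries SANs for the loopback address, the machine IP, the machine name, localhost and minikube. For readers unfamiliar with that step, a minimal standard-library Go sketch of issuing such a SAN-bearing certificate follows; it is self-signed for brevity, and the subject, SAN values and validity period are illustrative rather than minikube's actual provisioning code.

// sancert.go: issue a self-signed server certificate carrying the same kinds of
// SANs seen in the log above (IP addresses plus host names). Illustrative only.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-646344"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs analogous to the log: loopback, the VM IP, and the host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.133")},
		DNSNames:    []string{"embed-certs-646344", "localhost", "minikube"},
	}
	// Self-signed for brevity; the real provisioner signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}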
	I0725 18:50:13.540822   60732 provision.go:177] copyRemoteCerts
	I0725 18:50:13.540881   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:13.540903   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.543520   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.543805   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.543855   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.544013   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.544227   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.544450   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.544649   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:13.629982   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:13.652453   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:13.674398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:50:13.698302   60732 provision.go:87] duration metric: took 385.127611ms to configureAuth
	I0725 18:50:13.698329   60732 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:13.698501   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:13.698574   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.701274   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.701702   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701850   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.702049   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702345   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.702510   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.702699   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.702720   60732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:13.954912   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:13.954942   60732 machine.go:97] duration metric: took 982.402505ms to provisionDockerMachine
	I0725 18:50:13.954953   60732 start.go:293] postStartSetup for "embed-certs-646344" (driver="kvm2")
	I0725 18:50:13.954963   60732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:13.954978   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:13.955269   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:13.955301   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.957946   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958309   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.958332   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958459   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.958663   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.958805   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.959017   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.039361   60732 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:14.043389   60732 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:14.043416   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:14.043488   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:14.043588   60732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:14.043686   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:14.053277   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:14.075725   60732 start.go:296] duration metric: took 120.758673ms for postStartSetup
	I0725 18:50:14.075772   60732 fix.go:56] duration metric: took 17.662990552s for fixHost
	I0725 18:50:14.075795   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.078338   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078728   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.078782   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078932   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.079187   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079393   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.079763   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:14.080049   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:14.080068   60732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:50:14.180948   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933414.131955665
	
	I0725 18:50:14.180974   60732 fix.go:216] guest clock: 1721933414.131955665
	I0725 18:50:14.180983   60732 fix.go:229] Guest: 2024-07-25 18:50:14.131955665 +0000 UTC Remote: 2024-07-25 18:50:14.075776451 +0000 UTC m=+142.772748611 (delta=56.179214ms)
	I0725 18:50:14.181032   60732 fix.go:200] guest clock delta is within tolerance: 56.179214ms
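The fix.go lines above compare the guest's `date +%s.%N` reading with the host clock and accept the drift when it stays within a tolerance. A small Go sketch of that comparison, with the two timestamps from the log hard-coded and a one-second tolerance assumed for illustration:

// clockdelta.go: compare a guest clock reading against the host clock the way the
// log above does, flagging drift outside a tolerance. Values are illustrative.
package main

import (
	"fmt"
	"time"
)

func main() {
	const tolerance = time.Second // assumed tolerance for this sketch

	// Guest time as reported over SSH (1721933414.131955665 in the log) and the
	// host time observed at the same moment; both hard-coded here for the sketch.
	guest := time.Unix(1721933414, 131955665).UTC()
	host := time.Date(2024, 7, 25, 18, 50, 14, 75776451, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; a time sync would be needed\n", delta, tolerance)
	}
}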
	I0725 18:50:14.181038   60732 start.go:83] releasing machines lock for "embed-certs-646344", held for 17.768291807s
	I0725 18:50:14.181069   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.181338   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:14.183693   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184035   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.184065   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184195   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184748   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184936   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.185004   60732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:14.185050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.185172   60732 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:14.185203   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.187720   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188004   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188071   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188095   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188367   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188393   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188397   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188555   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.188567   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188738   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188757   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.188868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.189001   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.270424   60732 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:14.322480   60732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:14.468034   60732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:14.474022   60732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:14.474090   60732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:14.494765   60732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:50:14.494793   60732 start.go:495] detecting cgroup driver to use...
	I0725 18:50:14.494862   60732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:14.515047   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:14.531708   60732 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:14.531773   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:14.546508   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:14.560878   60732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:14.681034   60732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:14.830960   60732 docker.go:233] disabling docker service ...
	I0725 18:50:14.831032   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:14.853115   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:14.869852   60732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:14.995284   60732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:15.109759   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:15.123118   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:15.140723   60732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:50:15.140792   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.150912   60732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:15.150968   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.161603   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.173509   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.183857   60732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:15.195023   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.207216   60732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.223821   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
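The sed invocations above rewrite keys in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup and the default sysctls. The following Go sketch performs the same style of idempotent key rewrite; the setKey helper and the simplified file handling are assumptions for illustration, not the tooling used in this run.

// criodropin.go: apply the same kind of idempotent key rewrites the log performs
// with sed against /etc/crio/crio.conf.d/02-crio.conf. Requires root on the guest.
package main

import (
	"fmt"
	"log"
	"os"
	"regexp"
)

// setKey replaces any existing `key = ...` assignment with the desired value,
// appending the assignment if the key is not present at all.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %s", key, value)
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(line))
	}
	return append(conf, []byte("\n"+line+"\n")...)
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf = setKey(conf, "pause_image", `"registry.k8s.io/pause:3.9"`)
	conf = setKey(conf, "cgroup_manager", `"cgroupfs"`)
	conf = setKey(conf, "conmon_cgroup", `"pod"`)
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		log.Fatal(err)
	}
}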
	I0725 18:50:15.234472   60732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:15.243979   60732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:15.244032   60732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:15.256791   60732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
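The steps above probe the bridge-netfilter sysctl, load br_netfilter when the sysctl is missing, and enable IPv4 forwarding, which CRI-O's bridge CNI needs. A root-only Go sketch of the same preflight, with command names taken from the log and error handling simplified:

// netfilter.go: bridge-netfilter preflight mirroring the log: probe the sysctl,
// load br_netfilter if it is missing, then make sure IPv4 forwarding is on.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// If the sysctl cannot be read, the bridge netfilter module is not loaded yet.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("sysctl probe failed (%v), loading br_netfilter", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}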
	I0725 18:50:15.268608   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:15.396398   60732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:50:15.528593   60732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:15.528659   60732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:15.534218   60732 start.go:563] Will wait 60s for crictl version
	I0725 18:50:15.534288   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:50:15.537933   60732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:15.583719   60732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:15.583824   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.613123   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.643327   60732 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:50:14.204765   59378 main.go:141] libmachine: (no-preload-371663) Calling .Start
	I0725 18:50:14.204935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring networks are active...
	I0725 18:50:14.205596   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network default is active
	I0725 18:50:14.205935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network mk-no-preload-371663 is active
	I0725 18:50:14.206473   59378 main.go:141] libmachine: (no-preload-371663) Getting domain xml...
	I0725 18:50:14.207048   59378 main.go:141] libmachine: (no-preload-371663) Creating domain...
	I0725 18:50:15.487909   59378 main.go:141] libmachine: (no-preload-371663) Waiting to get IP...
	I0725 18:50:15.488775   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.489188   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.489244   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.489164   61562 retry.go:31] will retry after 288.758246ms: waiting for machine to come up
	I0725 18:50:15.779810   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.780284   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.780346   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.780234   61562 retry.go:31] will retry after 255.724346ms: waiting for machine to come up
	I0725 18:50:15.644608   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:15.647958   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648356   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:15.648386   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648602   60732 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:15.652342   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:15.664409   60732 kubeadm.go:883] updating cluster {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:15.664587   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:50:15.664658   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:15.701646   60732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:50:15.701703   60732 ssh_runner.go:195] Run: which lz4
	I0725 18:50:15.705629   60732 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:50:15.709366   60732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:50:15.709398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:50:11.540367   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.040178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.039929   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.540517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.040281   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.540287   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.039549   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.540265   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:16.039520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.828431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:17.287944   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:16.037762   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.038357   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.038391   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.038313   61562 retry.go:31] will retry after 486.960289ms: waiting for machine to come up
	I0725 18:50:16.527269   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.527868   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.527899   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.527826   61562 retry.go:31] will retry after 389.104399ms: waiting for machine to come up
	I0725 18:50:16.918319   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.918911   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.918945   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.918854   61562 retry.go:31] will retry after 690.549271ms: waiting for machine to come up
	I0725 18:50:17.610632   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:17.611242   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:17.611269   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:17.611199   61562 retry.go:31] will retry after 753.624655ms: waiting for machine to come up
	I0725 18:50:18.366551   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:18.367078   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:18.367119   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:18.367022   61562 retry.go:31] will retry after 1.115992813s: waiting for machine to come up
	I0725 18:50:19.484121   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:19.484611   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:19.484641   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:19.484556   61562 retry.go:31] will retry after 1.306583093s: waiting for machine to come up
	I0725 18:50:20.793118   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:20.793603   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:20.793630   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:20.793548   61562 retry.go:31] will retry after 1.175948199s: waiting for machine to come up
	I0725 18:50:17.015043   60732 crio.go:462] duration metric: took 1.309449954s to copy over tarball
	I0725 18:50:17.015143   60732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:50:19.256777   60732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241585619s)
	I0725 18:50:19.256816   60732 crio.go:469] duration metric: took 2.241743782s to extract the tarball
	I0725 18:50:19.256825   60732 ssh_runner.go:146] rm: /preloaded.tar.lz4
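Because no preloaded images were found, the ~406 MB preload tarball is copied to the node and unpacked into /var, after which the tarball is removed. A Go sketch of that extraction and cleanup step, assuming tar and lz4 are available on the guest and the tarball is already in place; the flags mirror the command in the log.

// preload.go: unpack the preloaded image tarball into /var with lz4 decompression,
// then remove it to reclaim the space. Sketch only; runs the system tar binary.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extracting %s: %v", tarball, err)
	}
	// The tarball is only needed once; remove it afterwards, as the log does.
	if err := os.Remove(tarball); err != nil {
		log.Fatalf("removing %s: %v", tarball, err)
	}
}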
	I0725 18:50:19.293259   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:19.346692   60732 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:50:19.346714   60732 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:50:19.346722   60732 kubeadm.go:934] updating node { 192.168.61.133 8443 v1.30.3 crio true true} ...
	I0725 18:50:19.346822   60732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-646344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:19.346884   60732 ssh_runner.go:195] Run: crio config
	I0725 18:50:19.391246   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:19.391272   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:19.391287   60732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:19.391320   60732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.133 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-646344 NodeName:embed-certs-646344 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.133"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:19.391518   60732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-646344"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.133
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.133"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:19.391597   60732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:50:19.401672   60732 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:19.401743   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:19.410693   60732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0725 18:50:19.428155   60732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:19.443819   60732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
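The rendered kubeadm configuration copied above is a multi-document YAML stream: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration. A short Go sketch that lists the kind and apiVersion of each document, using the third-party gopkg.in/yaml.v3 package and the file path from the log; this is an editor's illustration, not part of the test tooling.

// kubeadmcfg.go: list kind/apiVersion for every document in the rendered kubeadm
// config. gopkg.in/yaml.v3's Decoder walks multi-document streams natively.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			log.Fatal(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}

Run against the file above, it should print the four kinds shown in the config dump earlier in the log.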
	I0725 18:50:19.461139   60732 ssh_runner.go:195] Run: grep 192.168.61.133	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:19.465121   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.133	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:19.478939   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:19.593175   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:19.609679   60732 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344 for IP: 192.168.61.133
	I0725 18:50:19.609705   60732 certs.go:194] generating shared ca certs ...
	I0725 18:50:19.609726   60732 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:19.609918   60732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:19.609976   60732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:19.609989   60732 certs.go:256] generating profile certs ...
	I0725 18:50:19.610096   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/client.key
	I0725 18:50:19.610176   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key.b1982a11
	I0725 18:50:19.610227   60732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key
	I0725 18:50:19.610380   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:19.610424   60732 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:19.610436   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:19.610467   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:19.610490   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:19.610518   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:19.610575   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:19.611227   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:19.647448   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:19.679186   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:19.703996   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:19.731396   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0725 18:50:19.759550   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:50:19.795812   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:19.818419   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:19.840831   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:19.862271   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:19.886159   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:19.910827   60732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:19.926056   60732 ssh_runner.go:195] Run: openssl version
	I0725 18:50:19.931721   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:19.942217   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946261   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946324   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.951695   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:19.961642   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:19.971592   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975615   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975671   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.980904   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:19.991023   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:20.001258   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005322   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005398   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.010666   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:20.021300   60732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:20.025462   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:20.031181   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:20.037216   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:20.043670   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:20.051210   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:20.057316   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
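Each `openssl x509 ... -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours, which feeds the decision to regenerate certs. The same check in Go with crypto/x509, defaulting to one of the paths from the log (a different path can be passed as an argument):

// checkend.go: report whether a PEM certificate expires within the next 24 hours,
// mirroring `openssl x509 -checkend 86400` from the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour) // same horizon as -checkend 86400
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("%s expires %s: within 24h, would need regeneration\n", path, cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("%s valid until %s\n", path, cert.NotAfter)
}

As with openssl, a non-zero exit here signals that the certificate is about to expire.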
	I0725 18:50:20.062598   60732 kubeadm.go:392] StartCluster: {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:20.062719   60732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:20.062793   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.098154   60732 cri.go:89] found id: ""
	I0725 18:50:20.098229   60732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:20.107991   60732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:20.108017   60732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:20.108066   60732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:20.117394   60732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:20.118456   60732 kubeconfig.go:125] found "embed-certs-646344" server: "https://192.168.61.133:8443"
	I0725 18:50:20.120660   60732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:20.129546   60732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.133
	I0725 18:50:20.129576   60732 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:20.129589   60732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:20.129645   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.162792   60732 cri.go:89] found id: ""
	I0725 18:50:20.162883   60732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:20.178972   60732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:20.187981   60732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:20.188005   60732 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:20.188060   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:20.197371   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:20.197429   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:20.206704   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:20.215394   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:20.215459   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:20.224116   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.232437   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:20.232495   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.241577   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:20.249916   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:20.249976   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:20.258838   60732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:20.267902   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:20.380000   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:16.539725   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.539756   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.040221   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.539666   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.040416   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.540379   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.040257   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.540153   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:21.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.787705   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:22.230346   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:21.971072   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:21.971517   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:21.971544   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:21.971471   61562 retry.go:31] will retry after 1.926890777s: waiting for machine to come up
	I0725 18:50:23.900824   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:23.901448   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:23.901479   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:23.901397   61562 retry.go:31] will retry after 1.777870483s: waiting for machine to come up
	I0725 18:50:25.681617   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:25.682161   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:25.682190   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:25.682122   61562 retry.go:31] will retry after 2.846649743s: waiting for machine to come up
	I0725 18:50:21.816404   60732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.436368273s)
	I0725 18:50:21.816441   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.014796   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.093533   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.201595   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:22.201692   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.702680   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.202769   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.701909   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.720378   60732 api_server.go:72] duration metric: took 1.518780528s to wait for apiserver process to appear ...
	I0725 18:50:23.720468   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:23.720503   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:21.540165   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.539544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.040164   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.539691   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.040229   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.540225   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.039517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.540158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.542598   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:26.542661   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:26.542677   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.653001   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.653044   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:26.721231   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.725819   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.725851   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.221435   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.226412   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.226452   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.720962   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.726521   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.726550   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:28.221186   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:28.225358   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:50:28.232310   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:50:28.232348   60732 api_server.go:131] duration metric: took 4.511861085s to wait for apiserver health ...
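	(Editor's note, illustrative only: the healthz wait above is a plain retry loop — request /healthz, log any 403/500 body, and try again until a 200 "ok" arrives or the wait times out. A minimal Go sketch of that pattern follows; it is an assumption-based example, not minikube's actual api_server.go, and the IP address and timeout are simply taken from this log.)

	// Sketch: poll an apiserver /healthz endpoint until it reports "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver serves a self-signed certificate, so this example
		// skips TLS verification purely for illustration.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: "ok"
				}
				// 403/500 bodies like the ones above get logged and retried.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s never became healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.133:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}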
	I0725 18:50:28.232359   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:28.232368   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:28.234169   60732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:50:24.287433   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:26.287625   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.287755   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.235545   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:28.246029   60732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:50:28.265973   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:28.277752   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:28.277791   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:28.277801   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:28.277818   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:28.277830   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:28.277839   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:28.277851   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:28.277861   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:28.277868   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:28.277878   60732 system_pods.go:74] duration metric: took 11.88598ms to wait for pod list to return data ...
	I0725 18:50:28.277895   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:28.282289   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:28.282320   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:28.282335   60732 node_conditions.go:105] duration metric: took 4.431712ms to run NodePressure ...
	I0725 18:50:28.282354   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:28.551353   60732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557049   60732 kubeadm.go:739] kubelet initialised
	I0725 18:50:28.557074   60732 kubeadm.go:740] duration metric: took 5.692584ms waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557083   60732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:28.564396   60732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.568721   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568745   60732 pod_ready.go:81] duration metric: took 4.325942ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.568755   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568762   60732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.572373   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572397   60732 pod_ready.go:81] duration metric: took 3.627867ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.572404   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572411   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.576876   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576897   60732 pod_ready.go:81] duration metric: took 4.478779ms for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.576903   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576909   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.669762   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669788   60732 pod_ready.go:81] duration metric: took 92.870934ms for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.669797   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669803   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.069536   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069564   60732 pod_ready.go:81] duration metric: took 399.753713ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.069573   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069580   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.471102   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471130   60732 pod_ready.go:81] duration metric: took 401.542911ms for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.471139   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471145   60732 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.869464   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869499   60732 pod_ready.go:81] duration metric: took 398.344638ms for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.869511   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869520   60732 pod_ready.go:38] duration metric: took 1.312426343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
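	(Editor's note, illustrative only: the pod_ready waits above poll each system pod's Ready condition. The sketch below shows how such a check looks with client-go; it is an assumed example, not minikube's pod_ready.go, and reuses the kubeconfig path and pod name that appear in this log.)

	// Sketch: wait for a kube-system pod to report the Ready condition.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19326-5877/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll one control-plane pod until Ready or give up after 4 minutes,
		// mirroring the 4m0s waits in the log above.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-646344", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}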
	I0725 18:50:29.869549   60732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:29.881205   60732 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:29.881230   60732 kubeadm.go:597] duration metric: took 9.773206057s to restartPrimaryControlPlane
	I0725 18:50:29.881241   60732 kubeadm.go:394] duration metric: took 9.818649836s to StartCluster
	I0725 18:50:29.881264   60732 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.881348   60732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:29.882924   60732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.883197   60732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:29.883269   60732 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:29.883366   60732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-646344"
	I0725 18:50:29.883380   60732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-646344"
	I0725 18:50:29.883401   60732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-646344"
	W0725 18:50:29.883411   60732 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:29.883425   60732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-646344"
	I0725 18:50:29.883419   60732 addons.go:69] Setting metrics-server=true in profile "embed-certs-646344"
	I0725 18:50:29.883444   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883461   60732 addons.go:234] Setting addon metrics-server=true in "embed-certs-646344"
	W0725 18:50:29.883481   60732 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:29.883443   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:29.883512   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883840   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883870   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883929   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883969   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883935   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.884014   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.885204   60732 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:29.886676   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:29.899359   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0725 18:50:29.899418   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0725 18:50:29.899865   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900280   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900493   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900513   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900744   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900769   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900850   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901092   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901288   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.901473   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.901504   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.903520   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0725 18:50:29.903975   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.904512   60732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-646344"
	W0725 18:50:29.904529   60732 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:29.904542   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.904551   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.904558   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.904830   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.904854   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.904861   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.905388   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.905425   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.917614   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0725 18:50:29.918105   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.918628   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.918660   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.918960   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.919128   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.920885   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.922852   60732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:29.923872   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0725 18:50:29.923895   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37193
	I0725 18:50:29.924134   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:29.924148   60732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:29.924167   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.924376   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924451   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924817   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924837   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.924970   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924985   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.925223   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.925473   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.925493   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.926319   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.926366   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.926970   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.927368   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.927829   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927971   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.928192   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.928355   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.928445   60732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:28.529935   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:28.530428   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:28.530449   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:28.530381   61562 retry.go:31] will retry after 2.913225709s: waiting for machine to come up
	I0725 18:50:29.928527   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.929735   60732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:29.929755   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:29.929770   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.932668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933040   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.933066   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933304   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.933499   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.933674   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.933806   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.947401   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42223
	I0725 18:50:29.947801   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.948222   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.948249   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.948567   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.948819   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.950344   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.950550   60732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:29.950566   60732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:29.950584   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.953193   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953589   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.953618   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953892   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.954062   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.954224   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.954348   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:30.074297   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:30.095138   60732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-646344" to be "Ready" ...
	I0725 18:50:30.149031   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:30.154470   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:30.247852   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:30.247872   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:30.264189   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:30.264220   60732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:30.282583   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:30.282606   60732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:30.298927   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:31.226498   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.071992912s)
	I0725 18:50:31.226572   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226587   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.226730   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077663797s)
	I0725 18:50:31.226771   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226782   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227150   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227166   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227166   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227171   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227175   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227183   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227186   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227192   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227198   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227217   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227468   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227483   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227495   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227502   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227548   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227556   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.234538   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.234562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.234822   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.234839   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237597   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237615   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.237853   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.237871   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237871   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.237879   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237888   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.238123   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.238133   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.238144   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.238155   60732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-646344"
	I0725 18:50:31.239876   60732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:50:31.241165   60732 addons.go:510] duration metric: took 1.357900639s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0725 18:50:26.540560   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.039938   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.539928   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.039509   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.540137   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.040535   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.539745   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.039557   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.540254   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:31.040189   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.787880   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:33.288654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:31.446688   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has current primary IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447343   59378 main.go:141] libmachine: (no-preload-371663) Found IP for machine: 192.168.72.62
	I0725 18:50:31.447351   59378 main.go:141] libmachine: (no-preload-371663) Reserving static IP address...
	I0725 18:50:31.447800   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.447831   59378 main.go:141] libmachine: (no-preload-371663) DBG | skip adding static IP to network mk-no-preload-371663 - found existing host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"}
	I0725 18:50:31.447848   59378 main.go:141] libmachine: (no-preload-371663) Reserved static IP address: 192.168.72.62
	I0725 18:50:31.447862   59378 main.go:141] libmachine: (no-preload-371663) Waiting for SSH to be available...
	I0725 18:50:31.447875   59378 main.go:141] libmachine: (no-preload-371663) DBG | Getting to WaitForSSH function...
	I0725 18:50:31.449978   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450325   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.450344   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450468   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH client type: external
	I0725 18:50:31.450499   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa (-rw-------)
	I0725 18:50:31.450530   59378 main.go:141] libmachine: (no-preload-371663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:31.450547   59378 main.go:141] libmachine: (no-preload-371663) DBG | About to run SSH command:
	I0725 18:50:31.450553   59378 main.go:141] libmachine: (no-preload-371663) DBG | exit 0
	I0725 18:50:31.576105   59378 main.go:141] libmachine: (no-preload-371663) DBG | SSH cmd err, output: <nil>: 
	I0725 18:50:31.576631   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetConfigRaw
	I0725 18:50:31.577245   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.579460   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.579968   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.580003   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.580381   59378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/config.json ...
	I0725 18:50:31.580703   59378 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:31.580728   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:31.580956   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.583261   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583564   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.583592   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583717   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.583910   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584085   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584246   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.584476   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.584689   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.584701   59378 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:31.696230   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:31.696261   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696509   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:50:31.696536   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696714   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.699042   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699322   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.699359   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699484   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.699701   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699968   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.700164   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.700480   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.700503   59378 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-371663 && echo "no-preload-371663" | sudo tee /etc/hostname
	I0725 18:50:31.826044   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-371663
	
	I0725 18:50:31.826069   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.828951   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829261   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.829313   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829483   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.829695   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.829878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.830065   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.830274   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.830449   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.830466   59378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-371663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-371663/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-371663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:31.948518   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
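The hostname script above is idempotent: it rewrites an existing 127.0.1.1 entry in /etc/hosts or appends one, so repeated provisioning passes leave a single mapping. A minimal way to confirm the result on the guest, assuming the same binary and profile name used in this run and that the profile is still up, would be:

    # hostname should report the profile name, and /etc/hosts should carry the 127.0.1.1 mapping
    out/minikube-linux-amd64 -p no-preload-371663 ssh "hostname"
    out/minikube-linux-amd64 -p no-preload-371663 ssh "grep no-preload-371663 /etc/hosts"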
	I0725 18:50:31.948561   59378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:31.948739   59378 buildroot.go:174] setting up certificates
	I0725 18:50:31.948753   59378 provision.go:84] configureAuth start
	I0725 18:50:31.948771   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.949045   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.951790   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952169   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.952194   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952363   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.954317   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954610   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.954633   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954770   59378 provision.go:143] copyHostCerts
	I0725 18:50:31.954835   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:31.954848   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:31.954901   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:31.954987   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:31.954997   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:31.955021   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:31.955074   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:31.955081   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:31.955097   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:31.955149   59378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.no-preload-371663 san=[127.0.0.1 192.168.72.62 localhost minikube no-preload-371663]
	I0725 18:50:32.038369   59378 provision.go:177] copyRemoteCerts
	I0725 18:50:32.038427   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:32.038448   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.041392   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041787   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.041823   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041961   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.042148   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.042322   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.042454   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.130425   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:32.153447   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:32.179831   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:50:32.202512   59378 provision.go:87] duration metric: took 253.73326ms to configureAuth
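configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.72.62, localhost, minikube, no-preload-371663) and copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A quick sketch for checking what landed there, assuming the profile is still reachable, could look like:

    # list the provisioned certs and show the SANs of the server certificate
    out/minikube-linux-amd64 -p no-preload-371663 ssh "sudo ls -l /etc/docker"
    out/minikube-linux-amd64 -p no-preload-371663 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"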
	I0725 18:50:32.202539   59378 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:32.202722   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:32.202787   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.205038   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205415   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.205445   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205666   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.205853   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206022   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206162   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.206347   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.206543   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.206569   59378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:32.483108   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:32.483135   59378 machine.go:97] duration metric: took 902.412636ms to provisionDockerMachine
	I0725 18:50:32.483147   59378 start.go:293] postStartSetup for "no-preload-371663" (driver="kvm2")
	I0725 18:50:32.483162   59378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:32.483182   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.483495   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:32.483525   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.486096   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486457   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.486484   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486662   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.486856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.487002   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.487133   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.575210   59378 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:32.579169   59378 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:32.579196   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:32.579278   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:32.579383   59378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:32.579558   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:32.588619   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:32.611429   59378 start.go:296] duration metric: took 128.267646ms for postStartSetup
	I0725 18:50:32.611471   59378 fix.go:56] duration metric: took 18.430282963s for fixHost
	I0725 18:50:32.611493   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.614328   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614667   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.614696   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.615100   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615260   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615408   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.615587   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.615848   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.615863   59378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:50:32.724784   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933432.694745980
	
	I0725 18:50:32.724810   59378 fix.go:216] guest clock: 1721933432.694745980
	I0725 18:50:32.724822   59378 fix.go:229] Guest: 2024-07-25 18:50:32.69474598 +0000 UTC Remote: 2024-07-25 18:50:32.611474903 +0000 UTC m=+371.708292453 (delta=83.271077ms)
	I0725 18:50:32.724850   59378 fix.go:200] guest clock delta is within tolerance: 83.271077ms
	I0725 18:50:32.724864   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 18.543706361s
	I0725 18:50:32.724891   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.725152   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:32.727958   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728294   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.728340   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728478   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.728957   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729091   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729192   59378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:32.729243   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.729319   59378 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:32.729347   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.731757   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732040   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732063   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732081   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732196   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732384   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.732538   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732557   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732562   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.732734   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732734   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.732890   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.733041   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.733164   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.845665   59378 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:32.851484   59378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:32.994671   59378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:33.000655   59378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:33.000718   59378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:33.016541   59378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:50:33.016570   59378 start.go:495] detecting cgroup driver to use...
	I0725 18:50:33.016634   59378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:33.032473   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:33.046063   59378 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:33.046126   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:33.059249   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:33.072607   59378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:33.204647   59378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:33.353644   59378 docker.go:233] disabling docker service ...
	I0725 18:50:33.353719   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:33.368162   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:33.380709   59378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:33.521954   59378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:33.656011   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:33.668858   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:33.685751   59378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0725 18:50:33.685826   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.695022   59378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:33.695106   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.704447   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.713600   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.722782   59378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:33.733635   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.744226   59378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.761049   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.771689   59378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:33.781648   59378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:33.781695   59378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:33.794549   59378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:50:33.803765   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:33.915398   59378 ssh_runner.go:195] Run: sudo systemctl restart crio
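The sed edits above all target the /etc/crio/crio.conf.d/02-crio.conf drop-in (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl), and br_netfilter plus IPv4 forwarding are enabled before CRI-O is restarted. A short sketch for inspecting the applied settings on the guest, using the paths logged above, might be:

    # show the values written into the CRI-O drop-in
    out/minikube-linux-amd64 -p no-preload-371663 ssh "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # confirm the bridge netfilter module and forwarding sysctls took effect
    out/minikube-linux-amd64 -p no-preload-371663 ssh "lsmod | grep br_netfilter; sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables"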
	I0725 18:50:34.054477   59378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:34.054535   59378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:34.058998   59378 start.go:563] Will wait 60s for crictl version
	I0725 18:50:34.059058   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.062552   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:34.105552   59378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:34.105616   59378 ssh_runner.go:195] Run: crio --version
	I0725 18:50:34.134591   59378 ssh_runner.go:195] Run: crio --version
	I0725 18:50:34.166581   59378 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0725 18:50:34.167725   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:34.170389   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.170838   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:34.170869   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.171014   59378 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:34.174860   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:34.186830   59378 kubeadm.go:883] updating cluster {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:34.186934   59378 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0725 18:50:34.186964   59378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:34.221834   59378 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0725 18:50:34.221863   59378 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:34.221911   59378 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.221962   59378 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.221975   59378 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.221994   59378 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0725 18:50:34.222013   59378 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.221933   59378 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.222080   59378 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.222307   59378 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223376   59378 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.223405   59378 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0725 18:50:34.223394   59378 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.223416   59378 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223385   59378 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.223445   59378 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.223639   59378 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.223759   59378 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.460560   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0725 18:50:34.464591   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.478896   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.494335   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.507397   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.519589   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.524374   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.639570   59378 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0725 18:50:34.639620   59378 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.639628   59378 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0725 18:50:34.639664   59378 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.639678   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639701   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639728   59378 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0725 18:50:34.639749   59378 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.639756   59378 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0725 18:50:34.639710   59378 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0725 18:50:34.639789   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639791   59378 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.639793   59378 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.639815   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639822   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660351   59378 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0725 18:50:34.660401   59378 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.660418   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.660438   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.660446   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660488   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.660530   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.660621   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.748020   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0725 18:50:34.748120   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748133   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.748181   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.748204   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748254   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.761895   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.761960   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0725 18:50:34.762002   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.762056   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:34.762069   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.766440   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0725 18:50:34.766458   59378 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766478   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0725 18:50:34.766493   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766612   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0725 18:50:34.776491   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0725 18:50:34.806227   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0725 18:50:34.806283   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:34.806386   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:35.506093   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:32.098641   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:34.099078   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:31.540443   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.039950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.539852   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.039523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.539582   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.040355   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.539951   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.040161   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.540076   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:36.040195   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.787650   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:37.788363   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:36.755933   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.989415896s)
	I0725 18:50:36.755967   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0725 18:50:36.755980   59378 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.249846616s)
	I0725 18:50:36.756026   59378 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0725 18:50:36.755988   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.756064   59378 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.756113   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:36.756116   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.755938   59378 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.949524568s)
	I0725 18:50:36.756281   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0725 18:50:38.622350   59378 ssh_runner.go:235] Completed: which crictl: (1.866164977s)
	I0725 18:50:38.622426   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.866163984s)
	I0725 18:50:38.622504   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0725 18:50:38.622540   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622604   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622432   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.599286   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:37.098495   60732 node_ready.go:49] node "embed-certs-646344" has status "Ready":"True"
	I0725 18:50:37.098517   60732 node_ready.go:38] duration metric: took 7.003335292s for node "embed-certs-646344" to be "Ready" ...
	I0725 18:50:37.098526   60732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:37.104721   60732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109765   60732 pod_ready.go:92] pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.109788   60732 pod_ready.go:81] duration metric: took 5.033244ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109798   60732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113639   60732 pod_ready.go:92] pod "etcd-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.113661   60732 pod_ready.go:81] duration metric: took 3.854986ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113672   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.120875   60732 pod_ready.go:102] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:39.620552   60732 pod_ready.go:92] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:39.620573   60732 pod_ready.go:81] duration metric: took 2.506893984s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.620583   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628931   60732 pod_ready.go:92] pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.628959   60732 pod_ready.go:81] duration metric: took 1.008369558s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628973   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634812   60732 pod_ready.go:92] pod "kube-proxy-xk2lq" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.634840   60732 pod_ready.go:81] duration metric: took 5.858603ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634853   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:36.540043   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.039832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.540456   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.039553   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.539530   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.040246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.539520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.039506   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.539963   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:41.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.290126   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:42.787353   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.108821   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.486186911s)
	I0725 18:50:41.108854   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0725 18:50:41.108878   59378 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108884   59378 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.486217866s)
	I0725 18:50:41.108919   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108925   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 18:50:41.109010   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366140   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.257196486s)
	I0725 18:50:44.366170   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0725 18:50:44.366175   59378 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.257147663s)
	I0725 18:50:44.366192   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0725 18:50:44.366206   59378 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366252   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:45.013042   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0725 18:50:45.013079   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:45.013131   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:41.641738   60732 pod_ready.go:92] pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:41.641758   60732 pod_ready.go:81] duration metric: took 1.006897558s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:41.641768   60732 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:43.648859   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.147477   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.539822   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.039895   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.539947   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.040433   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.540098   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.040089   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.540140   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.040238   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.539529   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:46.040232   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.287326   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:47.288029   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.372000   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358829497s)
	I0725 18:50:46.372038   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0725 18:50:46.372056   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:46.372117   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:48.326922   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954778301s)
	I0725 18:50:48.326952   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0725 18:50:48.326981   59378 cache_images.go:123] Successfully loaded all cached images
	I0725 18:50:48.326987   59378 cache_images.go:92] duration metric: took 14.105111756s to LoadCachedImages
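The image-loading phase above boils down to streaming each cached tarball into the shared podman/CRI-O image store so the runtime can use it without pulling from a registry. A minimal Go sketch of that step (illustrative only, not minikube's actual implementation; the path is taken from the log):

// Load a cached image tarball into CRI-O's storage via "sudo podman load -i".
package main

import (
	"fmt"
	"os/exec"
)

func loadCachedImage(tarball string) error {
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
		fmt.Println(err)
	}
}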
	I0725 18:50:48.326998   59378 kubeadm.go:934] updating node { 192.168.72.62 8443 v1.31.0-beta.0 crio true true} ...
	I0725 18:50:48.327229   59378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-371663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:48.327311   59378 ssh_runner.go:195] Run: crio config
	I0725 18:50:48.380082   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:48.380104   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:48.380116   59378 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:48.380141   59378 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.62 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-371663 NodeName:no-preload-371663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:48.380276   59378 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-371663"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:48.380365   59378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0725 18:50:48.390309   59378 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:48.390375   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:48.399357   59378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0725 18:50:48.426673   59378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0725 18:50:48.443648   59378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0725 18:50:48.460908   59378 ssh_runner.go:195] Run: grep 192.168.72.62	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:48.464505   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
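The one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP. A rough Go equivalent, assuming it runs as root inside the guest (the IP is the one reported for this run):

// Drop any stale control-plane.minikube.internal entry and append the current IP.
package main

import (
	"os"
	"strings"
)

func pinControlPlaneHost(hostsPath, ip string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors the grep -v filter in the shell one-liner.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = pinControlPlaneHost("/etc/hosts", "192.168.72.62")
}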
	I0725 18:50:48.475937   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:48.598976   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:48.614468   59378 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663 for IP: 192.168.72.62
	I0725 18:50:48.614495   59378 certs.go:194] generating shared ca certs ...
	I0725 18:50:48.614511   59378 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:48.614683   59378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:48.614722   59378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:48.614732   59378 certs.go:256] generating profile certs ...
	I0725 18:50:48.614802   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.key
	I0725 18:50:48.614860   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key.1b99cd2e
	I0725 18:50:48.614894   59378 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key
	I0725 18:50:48.615018   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:48.615047   59378 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:48.615055   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:48.615091   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:48.615150   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:48.615204   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:48.615259   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:48.615987   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:48.647327   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:48.689347   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:48.718281   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:48.749086   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0725 18:50:48.775795   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:48.804894   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:48.827724   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:50:48.850476   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:48.873193   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:48.897778   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:48.922891   59378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:48.940439   59378 ssh_runner.go:195] Run: openssl version
	I0725 18:50:48.945916   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:48.956285   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960454   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960503   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.965881   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:48.975282   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:48.984697   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988899   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988958   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.993992   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:49.003677   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:49.013434   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017584   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017633   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.022926   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:49.033066   59378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:49.037719   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:49.043668   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:49.049308   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:49.055105   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:49.060763   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:49.066635   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
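The "openssl x509 -checkend 86400" calls above verify that each control-plane certificate remains valid for at least another 24 hours before reusing it. A sketch of the same check with Go's crypto/x509 (the certificate path comes from the log; everything else is illustrative):

// Report whether a PEM certificate stays valid for at least another duration d.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}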
	I0725 18:50:49.072235   59378 kubeadm.go:392] StartCluster: {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:49.072358   59378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:49.072426   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.107696   59378 cri.go:89] found id: ""
	I0725 18:50:49.107780   59378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:49.118074   59378 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:49.118098   59378 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:49.118144   59378 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:49.127465   59378 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:49.128541   59378 kubeconfig.go:125] found "no-preload-371663" server: "https://192.168.72.62:8443"
	I0725 18:50:49.130601   59378 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:49.140027   59378 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.62
	I0725 18:50:49.140074   59378 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:49.140087   59378 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:49.140148   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.188682   59378 cri.go:89] found id: ""
	I0725 18:50:49.188743   59378 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:49.205634   59378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:49.214829   59378 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:49.214858   59378 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:49.214912   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:49.223758   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:49.223825   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:49.233245   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:49.241613   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:49.241669   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:49.249965   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.258343   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:49.258404   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.267058   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:49.275241   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:49.275297   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:49.284219   59378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:49.293754   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:49.398525   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.308879   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.505415   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.573519   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.655766   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:50.655857   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.148464   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:50.649767   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.539657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.039681   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.540207   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.040234   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.539937   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.039544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.539646   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.039759   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.540439   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.040293   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.786573   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.786918   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:53.790293   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.156896   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.656267   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.675997   59378 api_server.go:72] duration metric: took 1.02022659s to wait for apiserver process to appear ...
	I0725 18:50:51.676029   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:51.676060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:51.676567   59378 api_server.go:269] stopped: https://192.168.72.62:8443/healthz: Get "https://192.168.72.62:8443/healthz": dial tcp 192.168.72.62:8443: connect: connection refused
	I0725 18:50:52.176176   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.302009   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.302043   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.302060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.313888   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.313913   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.676316   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.680686   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:54.680712   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.176378   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.181169   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:55.181195   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.676817   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.681072   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:50:55.689674   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:50:55.689697   59378 api_server.go:131] duration metric: took 4.013661633s to wait for apiserver health ...
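The healthz wait above simply polls the endpoint until it stops answering 403/500 (as during bootstrap-roles setup) and returns 200 "ok". A minimal polling loop under the assumption that TLS verification is skipped for brevity; minikube itself presumably trusts the cluster CA:

// Poll https://<apiserver>/healthz until it reports 200, or give up at the deadline.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 mean the server is up but not yet ready, so keep polling.
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	_ = waitForHealthz("https://192.168.72.62:8443/healthz", 4*time.Minute)
}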
	I0725 18:50:55.689705   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:55.689711   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:55.691626   59378 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:50:55.692856   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:55.705154   59378 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:50:55.722942   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:55.735231   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:55.735270   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:55.735281   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:55.735294   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:55.735303   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:55.735316   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:55.735325   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:55.735338   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:55.735346   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:55.735357   59378 system_pods.go:74] duration metric: took 12.387054ms to wait for pod list to return data ...
	I0725 18:50:55.735370   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:55.738963   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:55.738984   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:55.738998   59378 node_conditions.go:105] duration metric: took 3.619707ms to run NodePressure ...
	I0725 18:50:55.739017   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:53.151773   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:55.647633   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.540537   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.040242   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.539493   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.039657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.540427   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.039461   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.539605   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.040573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.038936   59378 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043772   59378 kubeadm.go:739] kubelet initialised
	I0725 18:50:56.043793   59378 kubeadm.go:740] duration metric: took 4.834181ms waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043801   59378 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:56.050252   59378 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.055796   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055819   59378 pod_ready.go:81] duration metric: took 5.539256ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.055827   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055845   59378 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.059725   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059745   59378 pod_ready.go:81] duration metric: took 3.890205ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.059755   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059762   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.063388   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063409   59378 pod_ready.go:81] duration metric: took 3.63968ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.063419   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063427   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.126502   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126531   59378 pod_ready.go:81] duration metric: took 63.090083ms for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.126544   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126554   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.526433   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526465   59378 pod_ready.go:81] duration metric: took 399.900344ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.526477   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526485   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.926658   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926686   59378 pod_ready.go:81] duration metric: took 400.192009ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.926696   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926702   59378 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:57.326373   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326398   59378 pod_ready.go:81] duration metric: took 399.68759ms for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:57.326408   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326415   59378 pod_ready.go:38] duration metric: took 1.282607524s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
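The pod_ready checks above amount to reading each system pod's PodReady condition (and skipping while the hosting node is itself not Ready). A client-go sketch of the core check, assuming the kubeconfig path reported for this run and that the client-go dependency is available:

// Report whether a pod's PodReady condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(client *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19326-5877/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podReady(client, "kube-system", "etcd-no-preload-371663")
	fmt.Println(ok, err)
}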
	I0725 18:50:57.326433   59378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:57.338819   59378 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:57.338836   59378 kubeadm.go:597] duration metric: took 8.220732382s to restartPrimaryControlPlane
	I0725 18:50:57.338845   59378 kubeadm.go:394] duration metric: took 8.26661565s to StartCluster
	I0725 18:50:57.338862   59378 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.338938   59378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:57.341213   59378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.341506   59378 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:57.341574   59378 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:57.341660   59378 addons.go:69] Setting storage-provisioner=true in profile "no-preload-371663"
	I0725 18:50:57.341684   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:57.341696   59378 addons.go:234] Setting addon storage-provisioner=true in "no-preload-371663"
	I0725 18:50:57.341691   59378 addons.go:69] Setting default-storageclass=true in profile "no-preload-371663"
	W0725 18:50:57.341705   59378 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:57.341719   59378 addons.go:69] Setting metrics-server=true in profile "no-preload-371663"
	I0725 18:50:57.341737   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.341776   59378 addons.go:234] Setting addon metrics-server=true in "no-preload-371663"
	W0725 18:50:57.341790   59378 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:57.341727   59378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-371663"
	I0725 18:50:57.341827   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.342109   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342146   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342157   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342185   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342205   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342238   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.343259   59378 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:57.344618   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:57.359231   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0725 18:50:57.359295   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41709
	I0725 18:50:57.359759   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360261   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360528   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360554   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.360885   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.360970   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360989   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.361279   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.361299   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.361452   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.361551   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0725 18:50:57.361947   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.361954   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.362450   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.362468   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.362901   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.363495   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.363514   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.365316   59378 addons.go:234] Setting addon default-storageclass=true in "no-preload-371663"
	W0725 18:50:57.365329   59378 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:57.365349   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.365748   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.365785   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.377970   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0725 18:50:57.379022   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.379523   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.379543   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.379963   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.380124   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.382257   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0725 18:50:57.382648   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.382989   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
	I0725 18:50:57.383098   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383110   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.383292   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.383365   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.383456   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.383764   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.383854   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383876   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.384308   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.384905   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.384948   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.385117   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.385388   59378 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:57.386699   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:57.386716   59378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:57.386716   59378 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:57.386784   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.388097   59378 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.388127   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:57.388142   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.389322   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389752   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.389782   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389902   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.390094   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.390251   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.390402   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.391324   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391699   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.391723   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391870   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.392024   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.392156   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.392289   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.429920   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0725 18:50:57.430364   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.430865   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.430883   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.431250   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.431459   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.433381   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.433618   59378 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.433636   59378 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:57.433655   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.436318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437075   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.437100   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.437139   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437253   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.437431   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.437629   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.533461   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:57.551609   59378 node_ready.go:35] waiting up to 6m0s for node "no-preload-371663" to be "Ready" ...
	I0725 18:50:57.663269   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:57.663295   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:57.676948   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.698961   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.699589   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:57.699608   59378 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:57.732899   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:57.732928   59378 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:57.783734   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:58.930567   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.231552088s)
	I0725 18:50:58.930632   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930653   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930686   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.146908463s)
	I0725 18:50:58.930684   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.253701775s)
	I0725 18:50:58.930724   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930737   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930751   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930739   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931112   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931129   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931137   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931143   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931143   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931150   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931159   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931167   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931171   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931178   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931237   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931349   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931363   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931373   59378 addons.go:475] Verifying addon metrics-server=true in "no-preload-371663"
	I0725 18:50:58.931520   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931559   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931576   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932215   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932238   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932267   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.932277   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.932506   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.932541   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932556   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940231   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.940252   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.940516   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.940535   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940519   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.942747   59378 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0725 18:50:56.286642   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.787357   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.943983   59378 addons.go:510] duration metric: took 1.602421244s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0725 18:50:59.554933   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.648530   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:00.147626   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:56.539704   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.039573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.539523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.040168   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.540038   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.040304   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.540248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.039609   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.540022   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.039843   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.285836   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:03.287743   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.555887   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:04.056538   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:05.055354   59378 node_ready.go:49] node "no-preload-371663" has status "Ready":"True"
	I0725 18:51:05.055378   59378 node_ready.go:38] duration metric: took 7.50373959s for node "no-preload-371663" to be "Ready" ...
	I0725 18:51:05.055389   59378 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:51:05.061464   59378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066947   59378 pod_ready.go:92] pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.066967   59378 pod_ready.go:81] duration metric: took 5.477209ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066978   59378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071413   59378 pod_ready.go:92] pod "etcd-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.071431   59378 pod_ready.go:81] duration metric: took 4.445948ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071441   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076020   59378 pod_ready.go:92] pod "kube-apiserver-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.076042   59378 pod_ready.go:81] duration metric: took 4.593495ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076053   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:02.648362   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:04.648959   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.539808   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.039515   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.540034   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.040266   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.539829   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.039496   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.540260   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.040236   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.540450   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:06.039595   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:06.039675   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:06.077020   60176 cri.go:89] found id: ""
	I0725 18:51:06.077048   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.077058   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:06.077066   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:06.077125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:06.109173   60176 cri.go:89] found id: ""
	I0725 18:51:06.109203   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.109213   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:06.109220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:06.109283   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:06.141838   60176 cri.go:89] found id: ""
	I0725 18:51:06.141875   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.141882   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:06.141888   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:06.141947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:06.175036   60176 cri.go:89] found id: ""
	I0725 18:51:06.175063   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.175074   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:06.175081   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:06.175144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:06.207497   60176 cri.go:89] found id: ""
	I0725 18:51:06.207519   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.207527   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:06.207532   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:06.207589   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:06.241910   60176 cri.go:89] found id: ""
	I0725 18:51:06.241936   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.241943   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:06.241948   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:06.242001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:06.273353   60176 cri.go:89] found id: ""
	I0725 18:51:06.273381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.273391   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:06.273398   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:06.273472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:06.307358   60176 cri.go:89] found id: ""
	I0725 18:51:06.307381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.307391   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:06.307401   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:06.307415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:06.360759   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:06.360792   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:06.373930   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:06.373956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:51:05.787345   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:08.287436   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:07.081865   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.082937   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:10.583975   59378 pod_ready.go:92] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.584001   59378 pod_ready.go:81] duration metric: took 5.507938695s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.584015   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588959   59378 pod_ready.go:92] pod "kube-proxy-bf9rt" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.588978   59378 pod_ready.go:81] duration metric: took 4.956126ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588986   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593238   59378 pod_ready.go:92] pod "kube-scheduler-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.593255   59378 pod_ready.go:81] duration metric: took 4.263169ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593263   59378 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:07.147874   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.649266   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:51:06.488979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:06.489003   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:06.489018   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:06.553782   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:06.553813   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.093966   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:09.106176   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:09.106242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:09.143847   60176 cri.go:89] found id: ""
	I0725 18:51:09.143872   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.143880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:09.143885   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:09.143936   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:09.178605   60176 cri.go:89] found id: ""
	I0725 18:51:09.178636   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.178647   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:09.178654   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:09.178715   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:09.211866   60176 cri.go:89] found id: ""
	I0725 18:51:09.211892   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.211901   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:09.211906   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:09.211957   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:09.244343   60176 cri.go:89] found id: ""
	I0725 18:51:09.244371   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.244381   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:09.244389   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:09.244445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:09.279416   60176 cri.go:89] found id: ""
	I0725 18:51:09.279440   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.279448   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:09.279463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:09.279530   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:09.317039   60176 cri.go:89] found id: ""
	I0725 18:51:09.317064   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.317071   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:09.317077   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:09.317123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:09.347997   60176 cri.go:89] found id: ""
	I0725 18:51:09.348031   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.348042   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:09.348049   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:09.348107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:09.380485   60176 cri.go:89] found id: ""
	I0725 18:51:09.380514   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.380524   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:09.380535   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:09.380560   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:09.451881   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:09.451920   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.488427   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:09.488454   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:09.538096   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:09.538142   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:09.551001   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:09.551026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:09.628882   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:10.287604   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.787008   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.600101   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:15.102797   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.149625   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:14.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.129787   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:12.141852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:12.141915   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:12.178227   60176 cri.go:89] found id: ""
	I0725 18:51:12.178257   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.178266   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:12.178271   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:12.178329   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:12.209154   60176 cri.go:89] found id: ""
	I0725 18:51:12.209179   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.209186   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:12.209190   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:12.209238   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:12.244091   60176 cri.go:89] found id: ""
	I0725 18:51:12.244119   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.244127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:12.244134   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:12.244183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:12.277865   60176 cri.go:89] found id: ""
	I0725 18:51:12.277894   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.277906   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:12.277911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:12.277958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:12.311172   60176 cri.go:89] found id: ""
	I0725 18:51:12.311196   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.311207   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:12.311214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:12.311274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:12.341668   60176 cri.go:89] found id: ""
	I0725 18:51:12.341696   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.341706   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:12.341714   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:12.341775   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:12.375342   60176 cri.go:89] found id: ""
	I0725 18:51:12.375372   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.375383   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:12.375390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:12.375449   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:12.409783   60176 cri.go:89] found id: ""
	I0725 18:51:12.409807   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.409814   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:12.409822   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:12.409834   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:12.484503   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:12.484546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:12.522948   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:12.522974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:12.573975   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:12.574008   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:12.587600   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:12.587628   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:12.660403   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.161385   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:15.174773   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:15.174845   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:15.206845   60176 cri.go:89] found id: ""
	I0725 18:51:15.206871   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.206882   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:15.206889   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:15.206949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:15.239306   60176 cri.go:89] found id: ""
	I0725 18:51:15.239335   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.239344   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:15.239350   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:15.239437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:15.276152   60176 cri.go:89] found id: ""
	I0725 18:51:15.276187   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.276198   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:15.276207   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:15.276265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:15.309616   60176 cri.go:89] found id: ""
	I0725 18:51:15.309647   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.309659   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:15.309667   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:15.309729   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:15.343938   60176 cri.go:89] found id: ""
	I0725 18:51:15.343967   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.343978   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:15.343985   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:15.344051   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:15.380268   60176 cri.go:89] found id: ""
	I0725 18:51:15.380298   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.380310   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:15.380317   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:15.380448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:15.421291   60176 cri.go:89] found id: ""
	I0725 18:51:15.421337   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.421347   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:15.421353   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:15.421408   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:15.466805   60176 cri.go:89] found id: ""
	I0725 18:51:15.466826   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.466835   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:15.466845   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:15.466859   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:15.513464   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:15.513489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:15.567742   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:15.567775   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:15.583613   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:15.583647   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:15.653613   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.653637   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:15.653651   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:15.287256   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.786753   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.599678   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.600015   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.147792   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.148724   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:18.230294   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:18.244269   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:18.244352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:18.282255   60176 cri.go:89] found id: ""
	I0725 18:51:18.282281   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.282291   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:18.282298   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:18.282377   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:18.316217   60176 cri.go:89] found id: ""
	I0725 18:51:18.316250   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.316261   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:18.316269   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:18.316349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:18.347730   60176 cri.go:89] found id: ""
	I0725 18:51:18.347756   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.347764   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:18.347769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:18.347815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:18.379968   60176 cri.go:89] found id: ""
	I0725 18:51:18.379991   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.379999   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:18.380006   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:18.380062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:18.415621   60176 cri.go:89] found id: ""
	I0725 18:51:18.415644   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.415652   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:18.415657   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:18.415704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:18.452073   60176 cri.go:89] found id: ""
	I0725 18:51:18.452101   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.452109   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:18.452115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:18.452171   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:18.483337   60176 cri.go:89] found id: ""
	I0725 18:51:18.483382   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.483390   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:18.483396   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:18.483440   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:18.516941   60176 cri.go:89] found id: ""
	I0725 18:51:18.516966   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.516976   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:18.516987   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:18.517002   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:18.587295   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:18.587321   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:18.587338   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:18.666539   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:18.666569   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:18.707434   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:18.707465   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:18.761893   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:18.761932   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.276464   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:21.291939   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:21.292011   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:21.326022   60176 cri.go:89] found id: ""
	I0725 18:51:21.326055   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.326066   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:21.326073   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:21.326130   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:21.366081   60176 cri.go:89] found id: ""
	I0725 18:51:21.366104   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.366112   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:21.366117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:21.366165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:20.287325   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.287799   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.101134   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:24.600119   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.647763   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:23.648088   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:25.649170   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.403086   60176 cri.go:89] found id: ""
	I0725 18:51:21.403111   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.403122   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:21.403128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:21.403208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:21.439268   60176 cri.go:89] found id: ""
	I0725 18:51:21.439297   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.439305   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:21.439310   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:21.439359   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:21.483601   60176 cri.go:89] found id: ""
	I0725 18:51:21.483631   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.483639   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:21.483645   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:21.483704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:21.519061   60176 cri.go:89] found id: ""
	I0725 18:51:21.519093   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.519103   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:21.519111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:21.519186   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:21.548781   60176 cri.go:89] found id: ""
	I0725 18:51:21.548806   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.548814   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:21.548820   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:21.548881   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:21.581940   60176 cri.go:89] found id: ""
	I0725 18:51:21.581963   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.581970   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:21.581979   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:21.581991   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:21.634758   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:21.634795   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.648358   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:21.648382   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:21.716109   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:21.716133   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:21.716149   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:21.794003   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:21.794030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.331731   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:24.344646   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:24.344709   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:24.385373   60176 cri.go:89] found id: ""
	I0725 18:51:24.385395   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.385403   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:24.385408   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:24.385453   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:24.417015   60176 cri.go:89] found id: ""
	I0725 18:51:24.417044   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.417054   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:24.417061   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:24.417125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:24.457093   60176 cri.go:89] found id: ""
	I0725 18:51:24.457118   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.457127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:24.457132   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:24.457197   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:24.489155   60176 cri.go:89] found id: ""
	I0725 18:51:24.489183   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.489192   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:24.489197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:24.489253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:24.521907   60176 cri.go:89] found id: ""
	I0725 18:51:24.521934   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.521943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:24.521949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:24.522006   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:24.553652   60176 cri.go:89] found id: ""
	I0725 18:51:24.553688   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.553698   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:24.553705   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:24.553765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:24.587957   60176 cri.go:89] found id: ""
	I0725 18:51:24.587989   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.587997   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:24.588002   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:24.588060   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:24.623564   60176 cri.go:89] found id: ""
	I0725 18:51:24.623591   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.623600   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:24.623609   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:24.623624   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:24.676176   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:24.676208   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:24.689179   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:24.689202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:24.761900   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:24.761928   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:24.761943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:24.845021   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:24.845058   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.287960   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:26.288704   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.788851   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.099186   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:29.100563   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.147374   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:30.148158   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.384900   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:27.398947   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:27.399009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:27.431604   60176 cri.go:89] found id: ""
	I0725 18:51:27.431632   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.431641   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:27.431648   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:27.431698   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:27.464167   60176 cri.go:89] found id: ""
	I0725 18:51:27.464201   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.464212   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:27.464220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:27.464279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:27.497951   60176 cri.go:89] found id: ""
	I0725 18:51:27.497985   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.497996   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:27.498003   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:27.498056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:27.535363   60176 cri.go:89] found id: ""
	I0725 18:51:27.535389   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.535401   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:27.535406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:27.535452   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:27.565506   60176 cri.go:89] found id: ""
	I0725 18:51:27.565531   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.565541   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:27.565548   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:27.565615   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:27.595635   60176 cri.go:89] found id: ""
	I0725 18:51:27.595662   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.595672   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:27.595678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:27.595734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:27.627482   60176 cri.go:89] found id: ""
	I0725 18:51:27.627511   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.627522   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:27.627529   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:27.627596   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:27.663481   60176 cri.go:89] found id: ""
	I0725 18:51:27.663507   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.663517   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:27.663530   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:27.663544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:27.746487   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:27.746519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:27.783100   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:27.783128   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:27.834865   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:27.834895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:27.849097   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:27.849124   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:27.914406   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:30.415417   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:30.429086   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:30.429151   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:30.470514   60176 cri.go:89] found id: ""
	I0725 18:51:30.470538   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.470561   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:30.470569   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:30.470629   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:30.503903   60176 cri.go:89] found id: ""
	I0725 18:51:30.503931   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.503942   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:30.503950   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:30.504014   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:30.535562   60176 cri.go:89] found id: ""
	I0725 18:51:30.535589   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.535597   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:30.535602   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:30.535667   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:30.567435   60176 cri.go:89] found id: ""
	I0725 18:51:30.567461   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.567471   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:30.567478   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:30.567538   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:30.604430   60176 cri.go:89] found id: ""
	I0725 18:51:30.604455   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.604465   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:30.604471   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:30.604540   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:30.644788   60176 cri.go:89] found id: ""
	I0725 18:51:30.644814   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.644834   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:30.644843   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:30.644908   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:30.678530   60176 cri.go:89] found id: ""
	I0725 18:51:30.678572   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.678585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:30.678593   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:30.678668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:30.713090   60176 cri.go:89] found id: ""
	I0725 18:51:30.713112   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.713120   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:30.713128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:30.713141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:30.792075   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:30.792106   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:30.829452   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:30.829482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:30.879437   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:30.879474   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:30.892281   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:30.892308   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:30.959814   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:31.286895   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.786731   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:31.599727   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.600800   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:35.601282   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:32.647508   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:34.648594   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.460838   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:33.474242   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:33.474351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:33.508097   60176 cri.go:89] found id: ""
	I0725 18:51:33.508125   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.508134   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:33.508140   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:33.508188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:33.542576   60176 cri.go:89] found id: ""
	I0725 18:51:33.542605   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.542612   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:33.542618   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:33.542666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:33.576079   60176 cri.go:89] found id: ""
	I0725 18:51:33.576106   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.576115   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:33.576122   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:33.576187   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:33.610618   60176 cri.go:89] found id: ""
	I0725 18:51:33.610639   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.610646   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:33.610651   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:33.610702   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:33.641925   60176 cri.go:89] found id: ""
	I0725 18:51:33.641960   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.641972   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:33.641979   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:33.642047   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:33.675283   60176 cri.go:89] found id: ""
	I0725 18:51:33.675318   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.675333   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:33.675346   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:33.675412   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:33.707991   60176 cri.go:89] found id: ""
	I0725 18:51:33.708017   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.708026   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:33.708034   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:33.708094   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:33.744209   60176 cri.go:89] found id: ""
	I0725 18:51:33.744237   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.744247   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:33.744258   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:33.744273   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:33.794620   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:33.794648   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:33.807089   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:33.807118   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:33.870937   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:33.870960   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:33.870976   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:33.953214   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:33.953249   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:36.287050   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.788127   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.100230   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:40.600037   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:37.147276   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:39.147994   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:36.491625   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:36.504949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:36.505023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:36.538077   60176 cri.go:89] found id: ""
	I0725 18:51:36.538101   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.538109   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:36.538114   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:36.538165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:36.570239   60176 cri.go:89] found id: ""
	I0725 18:51:36.570262   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.570269   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:36.570275   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:36.570325   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:36.603096   60176 cri.go:89] found id: ""
	I0725 18:51:36.603124   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.603133   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:36.603139   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:36.603196   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:36.637479   60176 cri.go:89] found id: ""
	I0725 18:51:36.637506   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.637518   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:36.637525   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:36.637580   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:36.670834   60176 cri.go:89] found id: ""
	I0725 18:51:36.670859   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.670868   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:36.670875   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:36.670942   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:36.707825   60176 cri.go:89] found id: ""
	I0725 18:51:36.707851   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.707859   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:36.707866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:36.707924   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:36.748014   60176 cri.go:89] found id: ""
	I0725 18:51:36.748046   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.748058   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:36.748067   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:36.748132   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:36.779939   60176 cri.go:89] found id: ""
	I0725 18:51:36.779967   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.779975   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:36.779982   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:36.779994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:36.836710   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:36.836741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:36.849791   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:36.849830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:36.919247   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:36.919270   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:36.919286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:36.994368   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:36.994405   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:39.530980   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:39.543355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:39.543417   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:39.576897   60176 cri.go:89] found id: ""
	I0725 18:51:39.576925   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.576935   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:39.576941   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:39.576996   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:39.610545   60176 cri.go:89] found id: ""
	I0725 18:51:39.610576   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.610584   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:39.610596   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:39.610651   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:39.642072   60176 cri.go:89] found id: ""
	I0725 18:51:39.642097   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.642107   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:39.642114   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:39.642173   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:39.673841   60176 cri.go:89] found id: ""
	I0725 18:51:39.673866   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.673874   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:39.673880   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:39.673933   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:39.706537   60176 cri.go:89] found id: ""
	I0725 18:51:39.706562   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.706571   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:39.706584   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:39.706635   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:39.744897   60176 cri.go:89] found id: ""
	I0725 18:51:39.744924   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.744935   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:39.744942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:39.745004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:39.780466   60176 cri.go:89] found id: ""
	I0725 18:51:39.780493   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.780503   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:39.780510   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:39.780581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:39.813672   60176 cri.go:89] found id: ""
	I0725 18:51:39.813694   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.813701   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:39.813709   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:39.813721   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:39.862459   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:39.862489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:39.875276   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:39.875304   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:39.941693   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:39.941715   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:39.941729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:40.017010   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:40.017055   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:41.286377   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.289761   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.600311   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.098813   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:41.647858   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.647939   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.559158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:42.571866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:42.571945   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:42.605268   60176 cri.go:89] found id: ""
	I0725 18:51:42.605317   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.605326   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:42.605332   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:42.605392   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:42.641719   60176 cri.go:89] found id: ""
	I0725 18:51:42.641753   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.641764   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:42.641774   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:42.641837   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:42.675667   60176 cri.go:89] found id: ""
	I0725 18:51:42.675695   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.675703   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:42.675711   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:42.675773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:42.709895   60176 cri.go:89] found id: ""
	I0725 18:51:42.709923   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.709933   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:42.709940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:42.710002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:42.742278   60176 cri.go:89] found id: ""
	I0725 18:51:42.742308   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.742318   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:42.742325   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:42.742395   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:42.773623   60176 cri.go:89] found id: ""
	I0725 18:51:42.773651   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.773661   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:42.773668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:42.773727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:42.810538   60176 cri.go:89] found id: ""
	I0725 18:51:42.810566   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.810576   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:42.810583   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:42.810657   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:42.850508   60176 cri.go:89] found id: ""
	I0725 18:51:42.850530   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.850537   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:42.850545   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:42.850556   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:42.901350   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:42.901389   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:42.914573   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:42.914600   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:42.978823   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:42.978852   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:42.978866   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:43.057323   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:43.057357   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:45.593677   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:45.607689   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:45.607801   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:45.640969   60176 cri.go:89] found id: ""
	I0725 18:51:45.640997   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.641007   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:45.641014   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:45.641075   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:45.672268   60176 cri.go:89] found id: ""
	I0725 18:51:45.672293   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.672300   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:45.672310   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:45.672396   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:45.705582   60176 cri.go:89] found id: ""
	I0725 18:51:45.705610   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.705618   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:45.705625   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:45.705686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:45.747705   60176 cri.go:89] found id: ""
	I0725 18:51:45.747737   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.747759   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:45.747766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:45.747815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:45.787258   60176 cri.go:89] found id: ""
	I0725 18:51:45.787284   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.787294   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:45.787302   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:45.787366   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:45.820971   60176 cri.go:89] found id: ""
	I0725 18:51:45.820992   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.821008   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:45.821019   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:45.821068   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:45.853828   60176 cri.go:89] found id: ""
	I0725 18:51:45.853858   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.853869   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:45.853876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:45.853935   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:45.886645   60176 cri.go:89] found id: ""
	I0725 18:51:45.886672   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.886682   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:45.886692   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:45.886708   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:45.953195   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:45.953223   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:45.953239   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:46.027894   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:46.027929   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:46.067935   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:46.067960   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:46.120467   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:46.120500   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:45.788103   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.287846   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:47.100357   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:49.100578   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.148035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:50.148589   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.634095   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:48.647390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:48.647464   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:48.683149   60176 cri.go:89] found id: ""
	I0725 18:51:48.683171   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.683178   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:48.683203   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:48.683252   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:48.720502   60176 cri.go:89] found id: ""
	I0725 18:51:48.720529   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.720539   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:48.720546   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:48.720593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:48.752927   60176 cri.go:89] found id: ""
	I0725 18:51:48.752954   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.752962   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:48.752968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:48.753025   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:48.788434   60176 cri.go:89] found id: ""
	I0725 18:51:48.788460   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.788468   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:48.788474   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:48.788520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:48.825157   60176 cri.go:89] found id: ""
	I0725 18:51:48.825184   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.825194   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:48.825199   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:48.825248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:48.859948   60176 cri.go:89] found id: ""
	I0725 18:51:48.859973   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.859981   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:48.859986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:48.860046   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:48.894788   60176 cri.go:89] found id: ""
	I0725 18:51:48.894811   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.894819   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:48.894824   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:48.894878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:48.929619   60176 cri.go:89] found id: ""
	I0725 18:51:48.929645   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.929653   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:48.929662   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:48.929675   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:49.001842   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:49.001865   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:49.001888   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:49.086265   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:49.086299   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:49.127674   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:49.127704   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:49.181388   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:49.181424   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:50.787213   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:53.287266   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.601462   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.099078   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:52.647863   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.648789   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.695119   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:51.707568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:51.707630   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:51.742936   60176 cri.go:89] found id: ""
	I0725 18:51:51.742963   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.742973   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:51.742980   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:51.743033   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:51.776584   60176 cri.go:89] found id: ""
	I0725 18:51:51.776610   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.776618   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:51.776623   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:51.776691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:51.809763   60176 cri.go:89] found id: ""
	I0725 18:51:51.809787   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.809795   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:51.809800   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:51.809846   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:51.843330   60176 cri.go:89] found id: ""
	I0725 18:51:51.843359   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.843366   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:51.843372   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:51.843428   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:51.877636   60176 cri.go:89] found id: ""
	I0725 18:51:51.877670   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.877680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:51.877685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:51.877734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:51.911846   60176 cri.go:89] found id: ""
	I0725 18:51:51.911869   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.911876   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:51.911881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:51.911927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:51.945447   60176 cri.go:89] found id: ""
	I0725 18:51:51.945474   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.945482   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:51.945488   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:51.945539   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:51.976801   60176 cri.go:89] found id: ""
	I0725 18:51:51.976828   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.976838   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:51.976848   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:51.976863   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:51.989131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:51.989158   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:52.055834   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:52.055857   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:52.055871   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:52.132360   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:52.132399   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:52.170676   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:52.170706   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:54.724654   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:54.738852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:54.738910   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:54.772356   60176 cri.go:89] found id: ""
	I0725 18:51:54.772386   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.772396   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:54.772403   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:54.772463   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:54.805079   60176 cri.go:89] found id: ""
	I0725 18:51:54.805105   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.805115   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:54.805122   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:54.805179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:54.836276   60176 cri.go:89] found id: ""
	I0725 18:51:54.836303   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.836313   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:54.836329   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:54.836394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:54.869019   60176 cri.go:89] found id: ""
	I0725 18:51:54.869046   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.869053   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:54.869059   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:54.869108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:54.905448   60176 cri.go:89] found id: ""
	I0725 18:51:54.905475   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.905485   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:54.905492   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:54.905553   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:54.937364   60176 cri.go:89] found id: ""
	I0725 18:51:54.937387   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.937396   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:54.937401   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:54.937448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:54.969287   60176 cri.go:89] found id: ""
	I0725 18:51:54.969322   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.969333   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:54.969340   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:54.969405   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:55.002779   60176 cri.go:89] found id: ""
	I0725 18:51:55.002804   60176 logs.go:276] 0 containers: []
	W0725 18:51:55.002811   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:55.002819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:55.002830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:55.015588   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:55.015614   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:55.093349   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:55.093372   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:55.093388   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:55.174006   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:55.174046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:55.211316   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:55.211347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:55.787379   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.286757   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:56.099628   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.100403   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:00.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.148430   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:59.648971   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.762027   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:57.774121   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:57.774194   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:57.814748   60176 cri.go:89] found id: ""
	I0725 18:51:57.814779   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.814790   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:57.814798   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:57.814860   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:57.851037   60176 cri.go:89] found id: ""
	I0725 18:51:57.851063   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.851070   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:57.851075   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:57.851123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:57.882717   60176 cri.go:89] found id: ""
	I0725 18:51:57.882749   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.882760   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:57.882768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:57.882830   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:57.917019   60176 cri.go:89] found id: ""
	I0725 18:51:57.917049   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.917059   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:57.917066   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:57.917126   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:57.950853   60176 cri.go:89] found id: ""
	I0725 18:51:57.950882   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.950891   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:57.950896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:57.950962   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:57.991946   60176 cri.go:89] found id: ""
	I0725 18:51:57.991970   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.991980   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:57.991986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:57.992049   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:58.037572   60176 cri.go:89] found id: ""
	I0725 18:51:58.037602   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.037611   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:58.037617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:58.037679   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:58.073018   60176 cri.go:89] found id: ""
	I0725 18:51:58.073040   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.073048   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:58.073056   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:58.073068   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:58.144357   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:58.144382   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:58.144398   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:58.224162   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:58.224202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:58.260888   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:58.260914   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:58.313819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:58.313848   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:00.826939   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:00.838883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:00.838951   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:00.872544   60176 cri.go:89] found id: ""
	I0725 18:52:00.872573   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.872584   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:00.872600   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:00.872663   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:00.903504   60176 cri.go:89] found id: ""
	I0725 18:52:00.903526   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.903533   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:00.903539   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:00.903585   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:00.938057   60176 cri.go:89] found id: ""
	I0725 18:52:00.938085   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.938095   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:00.938103   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:00.938168   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:00.970586   60176 cri.go:89] found id: ""
	I0725 18:52:00.970616   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.970625   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:00.970631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:00.970699   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:01.004158   60176 cri.go:89] found id: ""
	I0725 18:52:01.004192   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.004201   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:01.004205   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:01.004265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:01.036833   60176 cri.go:89] found id: ""
	I0725 18:52:01.036862   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.036871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:01.036876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:01.036927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:01.072207   60176 cri.go:89] found id: ""
	I0725 18:52:01.072236   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.072247   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:01.072253   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:01.072309   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:01.110805   60176 cri.go:89] found id: ""
	I0725 18:52:01.110859   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.110871   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:01.110882   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:01.110898   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:01.150422   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:01.150448   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:01.198988   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:01.199026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:01.212826   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:01.212860   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:01.282008   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:01.282034   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:01.282054   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:00.787431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.286174   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.599299   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:05.099494   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.147372   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:04.147989   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.148300   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.865014   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:03.877335   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:03.877419   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:03.913376   60176 cri.go:89] found id: ""
	I0725 18:52:03.913406   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.913413   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:03.913420   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:03.913469   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:03.948997   60176 cri.go:89] found id: ""
	I0725 18:52:03.949022   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.949029   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:03.949034   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:03.949082   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:03.985320   60176 cri.go:89] found id: ""
	I0725 18:52:03.985353   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.985361   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:03.985367   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:03.985423   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:04.019626   60176 cri.go:89] found id: ""
	I0725 18:52:04.019648   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.019656   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:04.019662   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:04.019716   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:04.050947   60176 cri.go:89] found id: ""
	I0725 18:52:04.050978   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.050989   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:04.050997   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:04.051066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:04.083581   60176 cri.go:89] found id: ""
	I0725 18:52:04.083613   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.083625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:04.083633   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:04.083712   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:04.117537   60176 cri.go:89] found id: ""
	I0725 18:52:04.117574   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.117585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:04.117592   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:04.117652   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:04.151531   60176 cri.go:89] found id: ""
	I0725 18:52:04.151556   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.151563   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:04.151575   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:04.151593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:04.201037   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:04.201067   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:04.214848   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:04.214879   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:04.281309   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:04.281338   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:04.281353   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:04.360880   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:04.360913   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:05.287780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.288971   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.100417   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:09.602529   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:08.149450   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:10.647672   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.899950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:06.912053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:06.912124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:06.945726   60176 cri.go:89] found id: ""
	I0725 18:52:06.945752   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.945761   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:06.945766   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:06.945824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:06.979170   60176 cri.go:89] found id: ""
	I0725 18:52:06.979200   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.979210   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:06.979217   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:06.979279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:07.009633   60176 cri.go:89] found id: ""
	I0725 18:52:07.009661   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.009670   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:07.009675   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:07.009735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:07.042022   60176 cri.go:89] found id: ""
	I0725 18:52:07.042045   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.042054   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:07.042061   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:07.042121   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:07.074755   60176 cri.go:89] found id: ""
	I0725 18:52:07.074779   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.074787   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:07.074792   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:07.074853   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:07.109421   60176 cri.go:89] found id: ""
	I0725 18:52:07.109447   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.109457   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:07.109464   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:07.109522   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:07.144848   60176 cri.go:89] found id: ""
	I0725 18:52:07.144879   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.144889   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:07.144897   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:07.144956   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:07.182129   60176 cri.go:89] found id: ""
	I0725 18:52:07.182157   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.182169   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:07.182178   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:07.182192   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:07.235471   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:07.235509   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:07.251999   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:07.252025   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:07.334671   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:07.334691   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:07.334703   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:07.415819   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:07.415853   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.953603   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:09.966281   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:09.966362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:09.998237   60176 cri.go:89] found id: ""
	I0725 18:52:09.998259   60176 logs.go:276] 0 containers: []
	W0725 18:52:09.998267   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:09.998272   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:09.998332   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:10.030191   60176 cri.go:89] found id: ""
	I0725 18:52:10.030213   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.030220   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:10.030228   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:10.030273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:10.062117   60176 cri.go:89] found id: ""
	I0725 18:52:10.062144   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.062154   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:10.062159   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:10.062208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:10.093801   60176 cri.go:89] found id: ""
	I0725 18:52:10.093831   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.093841   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:10.093848   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:10.093911   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:10.125705   60176 cri.go:89] found id: ""
	I0725 18:52:10.125731   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.125741   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:10.125748   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:10.125814   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:10.158731   60176 cri.go:89] found id: ""
	I0725 18:52:10.158753   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.158761   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:10.158766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:10.158810   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:10.190408   60176 cri.go:89] found id: ""
	I0725 18:52:10.190435   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.190443   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:10.190449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:10.190503   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:10.221937   60176 cri.go:89] found id: ""
	I0725 18:52:10.221967   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.221977   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:10.221992   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:10.222007   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:10.270299   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:10.270332   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:10.283787   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:10.283823   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:10.358121   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:10.358146   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:10.358163   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:10.437607   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:10.437643   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.786088   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:11.786251   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:13.786457   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.099676   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.600380   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.647922   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.648433   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.978064   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:12.995812   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:12.995868   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:13.041196   60176 cri.go:89] found id: ""
	I0725 18:52:13.041222   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.041231   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:13.041239   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:13.041290   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:13.074981   60176 cri.go:89] found id: ""
	I0725 18:52:13.075005   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.075013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:13.075018   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:13.075078   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:13.108689   60176 cri.go:89] found id: ""
	I0725 18:52:13.108714   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.108725   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:13.108732   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:13.108788   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:13.144876   60176 cri.go:89] found id: ""
	I0725 18:52:13.144903   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.144913   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:13.144920   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:13.145008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:13.177912   60176 cri.go:89] found id: ""
	I0725 18:52:13.177936   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.177943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:13.177949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:13.178004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:13.208752   60176 cri.go:89] found id: ""
	I0725 18:52:13.208783   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.208794   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:13.208802   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:13.208861   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:13.240146   60176 cri.go:89] found id: ""
	I0725 18:52:13.240181   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.240191   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:13.240197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:13.240265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:13.276749   60176 cri.go:89] found id: ""
	I0725 18:52:13.276775   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.276783   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:13.276793   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:13.276808   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:13.342307   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:13.342341   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:13.342358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:13.426659   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:13.426691   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:13.462986   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:13.463014   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:13.513921   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:13.513956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.028587   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:16.041712   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:16.041771   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:16.074562   60176 cri.go:89] found id: ""
	I0725 18:52:16.074593   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.074603   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:16.074611   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:16.074668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:16.110581   60176 cri.go:89] found id: ""
	I0725 18:52:16.110610   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.110620   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:16.110627   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:16.110686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:16.145233   60176 cri.go:89] found id: ""
	I0725 18:52:16.145256   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.145266   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:16.145274   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:16.145333   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:16.180032   60176 cri.go:89] found id: ""
	I0725 18:52:16.180059   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.180070   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:16.180084   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:16.180147   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:16.211984   60176 cri.go:89] found id: ""
	I0725 18:52:16.212013   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.212021   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:16.212028   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:16.212086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:16.243930   60176 cri.go:89] found id: ""
	I0725 18:52:16.243958   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.243965   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:16.243970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:16.244018   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:16.276858   60176 cri.go:89] found id: ""
	I0725 18:52:16.276886   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.276895   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:16.276903   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:16.276964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:16.309039   60176 cri.go:89] found id: ""
	I0725 18:52:16.309068   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.309079   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:16.309089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:16.309103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:16.358664   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:16.358699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.371701   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:16.371733   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:52:15.786767   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.787058   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.099941   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.100836   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.148099   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.150035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:52:16.440851   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:16.440877   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:16.440892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:16.515546   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:16.515581   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.053916   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:19.067831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:19.067899   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:19.100740   60176 cri.go:89] found id: ""
	I0725 18:52:19.100765   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.100776   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:19.100783   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:19.100844   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:19.137247   60176 cri.go:89] found id: ""
	I0725 18:52:19.137272   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.137279   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:19.137284   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:19.137348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:19.181550   60176 cri.go:89] found id: ""
	I0725 18:52:19.181582   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.181594   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:19.181601   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:19.181666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:19.215392   60176 cri.go:89] found id: ""
	I0725 18:52:19.215418   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.215427   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:19.215433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:19.215495   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:19.247896   60176 cri.go:89] found id: ""
	I0725 18:52:19.247923   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.247933   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:19.247940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:19.248001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:19.285250   60176 cri.go:89] found id: ""
	I0725 18:52:19.285276   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.285286   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:19.285293   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:19.285362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:19.323470   60176 cri.go:89] found id: ""
	I0725 18:52:19.323500   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.323510   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:19.323518   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:19.323583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:19.358435   60176 cri.go:89] found id: ""
	I0725 18:52:19.358458   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.358466   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:19.358475   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:19.358491   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:19.422806   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:19.422825   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:19.422837   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:19.504316   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:19.504370   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.543929   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:19.543956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:19.596268   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:19.596300   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:20.286982   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.287235   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.601342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.099874   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.648118   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.147655   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.148904   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.110193   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:22.123411   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:22.123472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:22.158539   60176 cri.go:89] found id: ""
	I0725 18:52:22.158577   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.158588   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:22.158595   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:22.158654   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:22.196231   60176 cri.go:89] found id: ""
	I0725 18:52:22.196260   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.196270   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:22.196277   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:22.196354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:22.233119   60176 cri.go:89] found id: ""
	I0725 18:52:22.233150   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.233160   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:22.233167   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:22.233231   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:22.265273   60176 cri.go:89] found id: ""
	I0725 18:52:22.265302   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.265312   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:22.265322   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:22.265384   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:22.298933   60176 cri.go:89] found id: ""
	I0725 18:52:22.298959   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.298968   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:22.298982   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:22.299055   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:22.330841   60176 cri.go:89] found id: ""
	I0725 18:52:22.330877   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.330888   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:22.330896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:22.330965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:22.363717   60176 cri.go:89] found id: ""
	I0725 18:52:22.363743   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.363753   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:22.363760   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:22.363818   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:22.398672   60176 cri.go:89] found id: ""
	I0725 18:52:22.398701   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.398711   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:22.398722   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:22.398739   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:22.452774   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:22.452807   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:22.465478   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:22.465507   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:22.538473   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:22.538492   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:22.538504   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:22.622982   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:22.623029   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:25.163174   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:25.176183   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:25.176242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:25.212455   60176 cri.go:89] found id: ""
	I0725 18:52:25.212488   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.212497   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:25.212504   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:25.212558   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:25.249901   60176 cri.go:89] found id: ""
	I0725 18:52:25.249930   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.249938   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:25.249943   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:25.250002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:25.284400   60176 cri.go:89] found id: ""
	I0725 18:52:25.284425   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.284435   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:25.284443   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:25.284510   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:25.322175   60176 cri.go:89] found id: ""
	I0725 18:52:25.322199   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.322208   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:25.322214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:25.322274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:25.358579   60176 cri.go:89] found id: ""
	I0725 18:52:25.358606   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.358613   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:25.358618   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:25.358668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:25.393516   60176 cri.go:89] found id: ""
	I0725 18:52:25.393541   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.393552   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:25.393559   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:25.393619   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:25.426256   60176 cri.go:89] found id: ""
	I0725 18:52:25.426284   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.426293   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:25.426300   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:25.426386   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:25.460227   60176 cri.go:89] found id: ""
	I0725 18:52:25.460249   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.460257   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:25.460265   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:25.460276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:25.512461   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:25.512494   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:25.526304   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:25.526347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:25.597593   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:25.597618   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:25.597634   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:25.674233   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:25.674269   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:24.787536   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:27.286447   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.100033   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.599703   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.648517   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:30.650728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.209473   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:28.223161   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:28.223226   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:28.260471   60176 cri.go:89] found id: ""
	I0725 18:52:28.260500   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.260510   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:28.260517   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:28.260578   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:28.296055   60176 cri.go:89] found id: ""
	I0725 18:52:28.296093   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.296109   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:28.296117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:28.296179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:28.327790   60176 cri.go:89] found id: ""
	I0725 18:52:28.327819   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.327830   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:28.327836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:28.327896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:28.359967   60176 cri.go:89] found id: ""
	I0725 18:52:28.359994   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.360005   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:28.360012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:28.360076   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:28.394025   60176 cri.go:89] found id: ""
	I0725 18:52:28.394057   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.394065   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:28.394070   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:28.394119   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:28.425844   60176 cri.go:89] found id: ""
	I0725 18:52:28.425866   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.425874   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:28.425881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:28.425952   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:28.459239   60176 cri.go:89] found id: ""
	I0725 18:52:28.459266   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.459276   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:28.459283   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:28.459355   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:28.493964   60176 cri.go:89] found id: ""
	I0725 18:52:28.493992   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.494004   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:28.494015   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:28.494030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:28.543108   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:28.543138   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:28.556408   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:28.556440   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:28.622780   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:28.622802   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:28.622815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:28.705901   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:28.705935   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.247642   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:31.260467   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:31.260536   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:31.293280   60176 cri.go:89] found id: ""
	I0725 18:52:31.293303   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.293311   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:31.293316   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:31.293372   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:31.325186   60176 cri.go:89] found id: ""
	I0725 18:52:31.325220   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.325232   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:31.325238   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:31.325295   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:31.359715   60176 cri.go:89] found id: ""
	I0725 18:52:31.359744   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.359756   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:31.359763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:31.359821   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:29.287628   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.787471   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.099921   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.600091   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.147181   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:35.147612   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.396998   60176 cri.go:89] found id: ""
	I0725 18:52:31.397031   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.397043   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:31.397051   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:31.397107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:31.430896   60176 cri.go:89] found id: ""
	I0725 18:52:31.430921   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.430934   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:31.430941   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:31.430993   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:31.464746   60176 cri.go:89] found id: ""
	I0725 18:52:31.464775   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.464785   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:31.464791   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:31.464856   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:31.500645   60176 cri.go:89] found id: ""
	I0725 18:52:31.500668   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.500677   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:31.500682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:31.500730   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:31.534394   60176 cri.go:89] found id: ""
	I0725 18:52:31.534418   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.534427   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:31.534434   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:31.534446   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:31.615633   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:31.615667   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.657138   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:31.657164   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:31.707872   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:31.707907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:31.721076   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:31.721100   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:31.787451   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:34.288248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:34.301172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:34.301230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:34.333115   60176 cri.go:89] found id: ""
	I0725 18:52:34.333143   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.333153   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:34.333159   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:34.333206   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:34.368762   60176 cri.go:89] found id: ""
	I0725 18:52:34.368794   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.368805   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:34.368812   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:34.368875   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:34.404655   60176 cri.go:89] found id: ""
	I0725 18:52:34.404681   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.404691   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:34.404699   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:34.404759   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:34.438034   60176 cri.go:89] found id: ""
	I0725 18:52:34.438058   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.438068   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:34.438075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:34.438134   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:34.472642   60176 cri.go:89] found id: ""
	I0725 18:52:34.472667   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.472678   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:34.472684   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:34.472744   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:34.511813   60176 cri.go:89] found id: ""
	I0725 18:52:34.511846   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.511858   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:34.511876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:34.511947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:34.544142   60176 cri.go:89] found id: ""
	I0725 18:52:34.544172   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.544183   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:34.544190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:34.544253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:34.580404   60176 cri.go:89] found id: ""
	I0725 18:52:34.580428   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.580439   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:34.580451   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:34.580468   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:34.620866   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:34.620892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:34.675204   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:34.675237   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:34.688592   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:34.688616   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:34.760208   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:34.760234   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:34.760251   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:34.288570   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.786448   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.786936   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.099207   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.099682   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.100107   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.647899   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.147664   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.337593   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:37.353055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:37.353125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:37.386957   60176 cri.go:89] found id: ""
	I0725 18:52:37.386985   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.386996   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:37.387003   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:37.387062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:37.419464   60176 cri.go:89] found id: ""
	I0725 18:52:37.419489   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.419496   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:37.419501   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:37.419557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:37.452553   60176 cri.go:89] found id: ""
	I0725 18:52:37.452582   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.452592   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:37.452598   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:37.452660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:37.484946   60176 cri.go:89] found id: ""
	I0725 18:52:37.484971   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.484978   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:37.484983   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:37.485029   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:37.517509   60176 cri.go:89] found id: ""
	I0725 18:52:37.517535   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.517546   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:37.517554   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:37.517604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:37.549971   60176 cri.go:89] found id: ""
	I0725 18:52:37.549995   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.550003   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:37.550010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:37.550067   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:37.581630   60176 cri.go:89] found id: ""
	I0725 18:52:37.581661   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.581670   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:37.581676   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:37.581736   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:37.616677   60176 cri.go:89] found id: ""
	I0725 18:52:37.616705   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.616714   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:37.616727   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:37.616741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:37.630482   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:37.630517   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:37.699856   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:37.699883   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:37.699912   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:37.781132   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:37.781162   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:37.819877   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:37.819904   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.372910   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:40.385605   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:40.385672   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:40.420547   60176 cri.go:89] found id: ""
	I0725 18:52:40.420575   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.420586   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:40.420593   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:40.420642   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:40.455644   60176 cri.go:89] found id: ""
	I0725 18:52:40.455666   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.455674   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:40.455679   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:40.455735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:40.486576   60176 cri.go:89] found id: ""
	I0725 18:52:40.486599   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.486607   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:40.486613   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:40.486661   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:40.520015   60176 cri.go:89] found id: ""
	I0725 18:52:40.520038   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.520046   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:40.520053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:40.520115   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:40.550645   60176 cri.go:89] found id: ""
	I0725 18:52:40.550672   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.550680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:40.550685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:40.550739   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:40.584736   60176 cri.go:89] found id: ""
	I0725 18:52:40.584759   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.584766   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:40.584771   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:40.584827   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:40.620112   60176 cri.go:89] found id: ""
	I0725 18:52:40.620140   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.620151   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:40.620158   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:40.620221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:40.660888   60176 cri.go:89] found id: ""
	I0725 18:52:40.660910   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.660917   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:40.660926   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:40.660937   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.713935   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:40.713967   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:40.727194   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:40.727218   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:40.797362   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:40.797387   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:40.797408   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:40.878723   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:40.878756   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:41.286942   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.288780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.600347   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:45.099379   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.148037   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:44.648236   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.421579   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:43.434054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:43.434113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:43.468844   60176 cri.go:89] found id: ""
	I0725 18:52:43.468870   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.468880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:43.468887   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:43.468948   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:43.501075   60176 cri.go:89] found id: ""
	I0725 18:52:43.501102   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.501113   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:43.501120   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:43.501175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:43.533347   60176 cri.go:89] found id: ""
	I0725 18:52:43.533372   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.533381   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:43.533387   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:43.533439   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:43.569764   60176 cri.go:89] found id: ""
	I0725 18:52:43.569787   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.569795   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:43.569801   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:43.569851   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:43.604897   60176 cri.go:89] found id: ""
	I0725 18:52:43.604924   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.604935   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:43.604942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:43.604999   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:43.638584   60176 cri.go:89] found id: ""
	I0725 18:52:43.638621   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.638633   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:43.638640   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:43.638691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:43.672302   60176 cri.go:89] found id: ""
	I0725 18:52:43.672348   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.672359   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:43.672366   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:43.672425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:43.708589   60176 cri.go:89] found id: ""
	I0725 18:52:43.708620   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.708630   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:43.708641   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:43.708660   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:43.761733   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:43.761766   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:43.775233   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:43.775258   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:43.840767   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:43.840788   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:43.840803   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:43.914698   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:43.914730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:45.786511   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.787882   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.100130   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.600576   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.147728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.648227   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:46.451913   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:46.465836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:46.465896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:46.499330   60176 cri.go:89] found id: ""
	I0725 18:52:46.499359   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.499369   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:46.499381   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:46.499446   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:46.537724   60176 cri.go:89] found id: ""
	I0725 18:52:46.537748   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.537758   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:46.537764   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:46.537825   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:46.568410   60176 cri.go:89] found id: ""
	I0725 18:52:46.568437   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.568446   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:46.568453   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:46.568519   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:46.599497   60176 cri.go:89] found id: ""
	I0725 18:52:46.599525   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.599535   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:46.599542   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:46.599607   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:46.631388   60176 cri.go:89] found id: ""
	I0725 18:52:46.631418   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.631427   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:46.631433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:46.631489   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:46.670666   60176 cri.go:89] found id: ""
	I0725 18:52:46.670688   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.670695   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:46.670701   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:46.670756   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:46.702825   60176 cri.go:89] found id: ""
	I0725 18:52:46.702862   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.702874   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:46.702883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:46.702947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:46.738431   60176 cri.go:89] found id: ""
	I0725 18:52:46.738459   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.738469   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:46.738479   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:46.738493   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:46.796704   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:46.796748   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:46.812042   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:46.812072   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:46.884905   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:46.884927   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:46.884942   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:46.965733   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:46.965773   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.505190   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:49.519648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:49.519733   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:49.559027   60176 cri.go:89] found id: ""
	I0725 18:52:49.559057   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.559064   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:49.559072   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:49.559124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:49.591468   60176 cri.go:89] found id: ""
	I0725 18:52:49.591489   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.591497   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:49.591503   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:49.591557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:49.629091   60176 cri.go:89] found id: ""
	I0725 18:52:49.629120   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.629129   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:49.629135   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:49.629199   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:49.664584   60176 cri.go:89] found id: ""
	I0725 18:52:49.664621   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.664633   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:49.664641   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:49.664693   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:49.695208   60176 cri.go:89] found id: ""
	I0725 18:52:49.695237   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.695247   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:49.695258   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:49.695323   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:49.726260   60176 cri.go:89] found id: ""
	I0725 18:52:49.726288   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.726299   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:49.726306   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:49.726468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:49.759938   60176 cri.go:89] found id: ""
	I0725 18:52:49.759969   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.759981   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:49.759990   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:49.760043   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:49.794113   60176 cri.go:89] found id: ""
	I0725 18:52:49.794142   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.794153   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:49.794164   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:49.794178   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.834409   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:49.834443   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:49.890684   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:49.890730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:49.904504   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:49.904534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:49.971482   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:49.971508   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:49.971523   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:50.286712   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.786827   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.099988   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.600144   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.147545   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.147590   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:56.148752   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.552586   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:52.564658   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:52.564732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:52.604434   60176 cri.go:89] found id: ""
	I0725 18:52:52.604460   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.604470   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:52.604478   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:52.604532   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:52.638870   60176 cri.go:89] found id: ""
	I0725 18:52:52.638893   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.638907   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:52.638914   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:52.638973   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:52.670494   60176 cri.go:89] found id: ""
	I0725 18:52:52.670521   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.670531   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:52.670538   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:52.670604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:52.702250   60176 cri.go:89] found id: ""
	I0725 18:52:52.702282   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.702291   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:52.702298   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:52.702360   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:52.734144   60176 cri.go:89] found id: ""
	I0725 18:52:52.734172   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.734181   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:52.734187   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:52.734241   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:52.767581   60176 cri.go:89] found id: ""
	I0725 18:52:52.767606   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.767617   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:52.767624   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:52.767687   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:52.798874   60176 cri.go:89] found id: ""
	I0725 18:52:52.798895   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.798903   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:52.798908   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:52.798965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:52.829237   60176 cri.go:89] found id: ""
	I0725 18:52:52.829266   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.829276   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:52.829287   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:52.829309   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:52.879820   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:52.879856   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:52.893453   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:52.893477   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:52.962899   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:52.962925   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:52.962944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:53.042202   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:53.042234   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.581146   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:55.594458   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:55.594529   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:55.628122   60176 cri.go:89] found id: ""
	I0725 18:52:55.628152   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.628163   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:55.628170   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:55.628240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:55.661098   60176 cri.go:89] found id: ""
	I0725 18:52:55.661126   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.661137   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:55.661143   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:55.661195   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:55.694635   60176 cri.go:89] found id: ""
	I0725 18:52:55.694664   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.694675   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:55.694682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:55.694746   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:55.728875   60176 cri.go:89] found id: ""
	I0725 18:52:55.728902   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.728912   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:55.728924   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:55.728986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:55.764386   60176 cri.go:89] found id: ""
	I0725 18:52:55.764414   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.764423   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:55.764430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:55.764487   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:55.798285   60176 cri.go:89] found id: ""
	I0725 18:52:55.798335   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.798348   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:55.798355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:55.798407   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:55.833049   60176 cri.go:89] found id: ""
	I0725 18:52:55.833076   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.833083   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:55.833088   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:55.833144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:55.872278   60176 cri.go:89] found id: ""
	I0725 18:52:55.872310   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.872335   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:55.872347   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:55.872362   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.908301   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:55.908344   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:55.960197   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:55.960230   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:55.973912   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:55.973941   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:56.042103   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:56.042128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:56.042141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:54.787516   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.286820   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.099342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:59.099712   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.647566   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:00.647721   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.618832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:58.631315   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:58.631374   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:58.666492   60176 cri.go:89] found id: ""
	I0725 18:52:58.666521   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.666532   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:58.666540   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:58.666608   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:58.700391   60176 cri.go:89] found id: ""
	I0725 18:52:58.700421   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.700431   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:58.700450   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:58.700518   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:58.734582   60176 cri.go:89] found id: ""
	I0725 18:52:58.734608   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.734617   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:58.734621   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:58.734692   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:58.767777   60176 cri.go:89] found id: ""
	I0725 18:52:58.767806   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.767817   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:58.767823   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:58.767886   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:58.801021   60176 cri.go:89] found id: ""
	I0725 18:52:58.801046   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.801053   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:58.801058   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:58.801102   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:58.833191   60176 cri.go:89] found id: ""
	I0725 18:52:58.833223   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.833231   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:58.833236   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:58.833284   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:58.864805   60176 cri.go:89] found id: ""
	I0725 18:52:58.864839   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.864849   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:58.864854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:58.864916   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:58.896342   60176 cri.go:89] found id: ""
	I0725 18:52:58.896373   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.896384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:58.896396   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:58.896415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:58.950614   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:58.950652   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:58.974026   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:58.974063   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:59.056282   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:59.056305   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:59.056349   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:59.138254   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:59.138292   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:59.785805   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.787477   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.099859   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.604940   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.147177   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:05.147885   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.680405   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:01.693093   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:01.693161   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:01.725456   60176 cri.go:89] found id: ""
	I0725 18:53:01.725483   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.725494   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:01.725501   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:01.725562   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:01.757644   60176 cri.go:89] found id: ""
	I0725 18:53:01.757677   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.757688   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:01.757694   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:01.757765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:01.793640   60176 cri.go:89] found id: ""
	I0725 18:53:01.793660   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.793667   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:01.793672   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:01.793718   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:01.829336   60176 cri.go:89] found id: ""
	I0725 18:53:01.829368   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.829379   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:01.829386   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:01.829442   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:01.864597   60176 cri.go:89] found id: ""
	I0725 18:53:01.864625   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.864636   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:01.864643   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:01.864704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:01.895962   60176 cri.go:89] found id: ""
	I0725 18:53:01.895990   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.896001   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:01.896012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:01.896070   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:01.926426   60176 cri.go:89] found id: ""
	I0725 18:53:01.926451   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.926459   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:01.926463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:01.926517   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:01.957722   60176 cri.go:89] found id: ""
	I0725 18:53:01.957746   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.957755   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:01.957764   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:01.957779   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:02.012061   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:02.012096   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:02.025396   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:02.025423   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:02.088683   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:02.088706   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:02.088718   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:02.170941   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:02.170974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.713619   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:04.734911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:04.734970   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:04.793399   60176 cri.go:89] found id: ""
	I0725 18:53:04.793427   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.793438   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:04.793445   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:04.793493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:04.823679   60176 cri.go:89] found id: ""
	I0725 18:53:04.823711   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.823723   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:04.823729   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:04.823793   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:04.854922   60176 cri.go:89] found id: ""
	I0725 18:53:04.854957   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.854964   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:04.854970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:04.855023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:04.886913   60176 cri.go:89] found id: ""
	I0725 18:53:04.886937   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.886945   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:04.886953   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:04.887008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:04.919868   60176 cri.go:89] found id: ""
	I0725 18:53:04.919896   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.919907   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:04.919914   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:04.919979   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:04.953542   60176 cri.go:89] found id: ""
	I0725 18:53:04.953571   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.953581   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:04.953588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:04.953649   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:04.986901   60176 cri.go:89] found id: ""
	I0725 18:53:04.986925   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.986932   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:04.986937   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:04.986986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:05.020084   60176 cri.go:89] found id: ""
	I0725 18:53:05.020124   60176 logs.go:276] 0 containers: []
	W0725 18:53:05.020133   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:05.020141   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:05.020153   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:05.075512   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:05.075544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:05.089227   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:05.089256   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:05.155689   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:05.155707   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:05.155719   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:05.230252   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:05.230286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.286327   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.286366   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.287693   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.099267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.100754   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:10.599173   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.148931   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:09.647549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.770919   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:07.784196   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:07.784354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:07.817549   60176 cri.go:89] found id: ""
	I0725 18:53:07.817581   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.817593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:07.817601   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:07.817674   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:07.852853   60176 cri.go:89] found id: ""
	I0725 18:53:07.852876   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.852883   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:07.852889   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:07.852941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:07.890344   60176 cri.go:89] found id: ""
	I0725 18:53:07.890370   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.890377   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:07.890383   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:07.890443   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:07.921718   60176 cri.go:89] found id: ""
	I0725 18:53:07.921749   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.921760   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:07.921768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:07.921824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:07.955721   60176 cri.go:89] found id: ""
	I0725 18:53:07.955753   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.955763   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:07.955769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:07.955820   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:07.987760   60176 cri.go:89] found id: ""
	I0725 18:53:07.987789   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.987799   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:07.987806   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:07.987878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:08.020881   60176 cri.go:89] found id: ""
	I0725 18:53:08.020912   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.020922   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:08.020929   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:08.020994   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:08.053983   60176 cri.go:89] found id: ""
	I0725 18:53:08.054013   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.054025   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:08.054037   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:08.054053   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:08.134954   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:08.134996   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:08.177056   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:08.177085   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:08.229080   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:08.229121   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:08.242211   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:08.242242   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:08.305979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
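Editor's note: the recurring "connection to the server localhost:8443 was refused" failure is expected here. The "describe nodes" step can only succeed once an apiserver is actually listening on the node, and every pass above finds no kube-apiserver container, so the kubectl call fails the same way each time. A quick way to confirm the state of that port is a plain TCP dial; the sketch below is an illustrative check run from the node itself (or with 8443 forwarded), not part of the test harness.

```go
// Hypothetical connectivity probe: a refused TCP dial to 127.0.0.1:8443
// matches the "connection ... was refused" error in the describe-nodes step.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}
```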
	I0725 18:53:10.806662   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:10.819111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:10.819172   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:10.854609   60176 cri.go:89] found id: ""
	I0725 18:53:10.854639   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.854652   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:10.854660   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:10.854743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:10.893436   60176 cri.go:89] found id: ""
	I0725 18:53:10.893466   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.893478   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:10.893486   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:10.893555   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:10.927410   60176 cri.go:89] found id: ""
	I0725 18:53:10.927435   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.927444   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:10.927449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:10.927520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:10.958061   60176 cri.go:89] found id: ""
	I0725 18:53:10.958082   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.958090   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:10.958095   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:10.958149   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:10.988781   60176 cri.go:89] found id: ""
	I0725 18:53:10.988812   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.988824   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:10.988831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:10.988892   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:11.021096   60176 cri.go:89] found id: ""
	I0725 18:53:11.021126   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.021137   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:11.021145   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:11.021204   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:11.053320   60176 cri.go:89] found id: ""
	I0725 18:53:11.053355   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.053368   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:11.053377   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:11.053445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:11.085018   60176 cri.go:89] found id: ""
	I0725 18:53:11.085046   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.085055   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:11.085063   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:11.085074   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:11.136102   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:11.136139   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:11.150126   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:11.150154   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:11.219206   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:11.219226   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:11.219243   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:11.301501   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:11.301534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:10.787076   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.287049   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.100296   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:15.598090   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:11.648889   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:14.148494   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.148801   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
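Editor's note: the interleaved pod_ready lines come from other test processes (PIDs 59645, 59378, 60732) polling their metrics-server pods; a pod only counts as ready once its Ready condition reports True. A minimal client-go sketch of that check is below; the kubeconfig path and pod name are illustrative placeholders taken from the log, and it assumes a reachable cluster.

```go
// Hypothetical readiness check, similar in spirit to the pod_ready polling
// above: fetch the pod and inspect its Ready condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path and pod name are placeholders for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"metrics-server-569cc877fc-4gcts", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}
```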
	I0725 18:53:13.840771   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:13.853763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:13.853848   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:13.889060   60176 cri.go:89] found id: ""
	I0725 18:53:13.889089   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.889098   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:13.889105   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:13.889163   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:13.920861   60176 cri.go:89] found id: ""
	I0725 18:53:13.920889   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.920900   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:13.920910   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:13.920974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:13.952009   60176 cri.go:89] found id: ""
	I0725 18:53:13.952037   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.952048   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:13.952054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:13.952117   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:13.985991   60176 cri.go:89] found id: ""
	I0725 18:53:13.986020   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.986030   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:13.986036   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:13.986098   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:14.024968   60176 cri.go:89] found id: ""
	I0725 18:53:14.024995   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.025003   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:14.025008   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:14.025079   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:14.058861   60176 cri.go:89] found id: ""
	I0725 18:53:14.058886   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.058897   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:14.058912   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:14.058976   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:14.092587   60176 cri.go:89] found id: ""
	I0725 18:53:14.092613   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.092628   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:14.092634   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:14.092697   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:14.127085   60176 cri.go:89] found id: ""
	I0725 18:53:14.127115   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.127124   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:14.127134   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:14.127148   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:14.179505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:14.179537   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:14.192813   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:14.192840   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:14.256564   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:14.256590   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:14.256604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:14.338570   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:14.338604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:15.287102   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.787128   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.599288   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:19.600086   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:18.648466   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:21.147558   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.877636   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:16.891131   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:16.891208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:16.924210   60176 cri.go:89] found id: ""
	I0725 18:53:16.924243   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.924253   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:16.924261   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:16.924343   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:16.957212   60176 cri.go:89] found id: ""
	I0725 18:53:16.957240   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.957247   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:16.957254   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:16.957341   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:16.989205   60176 cri.go:89] found id: ""
	I0725 18:53:16.989236   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.989244   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:16.989249   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:16.989298   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:17.025775   60176 cri.go:89] found id: ""
	I0725 18:53:17.025801   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.025812   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:17.025819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:17.025887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:17.059185   60176 cri.go:89] found id: ""
	I0725 18:53:17.059213   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.059223   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:17.059229   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:17.059275   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:17.090838   60176 cri.go:89] found id: ""
	I0725 18:53:17.090863   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.090871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:17.090876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:17.090932   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:17.126012   60176 cri.go:89] found id: ""
	I0725 18:53:17.126036   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.126044   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:17.126048   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:17.126106   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:17.165369   60176 cri.go:89] found id: ""
	I0725 18:53:17.165394   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.165405   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:17.165415   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:17.165436   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:17.178730   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:17.178771   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:17.251639   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:17.251666   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:17.251681   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:17.334840   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:17.334887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:17.380868   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:17.380895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.931610   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:19.943864   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:19.943964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:19.975865   60176 cri.go:89] found id: ""
	I0725 18:53:19.975893   60176 logs.go:276] 0 containers: []
	W0725 18:53:19.975904   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:19.975910   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:19.975975   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:20.010230   60176 cri.go:89] found id: ""
	I0725 18:53:20.010258   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.010268   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:20.010274   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:20.010321   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:20.042591   60176 cri.go:89] found id: ""
	I0725 18:53:20.042618   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.042626   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:20.042632   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:20.042680   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:20.073184   60176 cri.go:89] found id: ""
	I0725 18:53:20.073212   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.073224   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:20.073231   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:20.073286   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:20.106770   60176 cri.go:89] found id: ""
	I0725 18:53:20.106798   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.106810   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:20.106818   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:20.106888   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:20.141368   60176 cri.go:89] found id: ""
	I0725 18:53:20.141419   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.141429   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:20.141437   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:20.141496   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:20.174814   60176 cri.go:89] found id: ""
	I0725 18:53:20.174841   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.174852   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:20.174859   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:20.174918   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:20.208463   60176 cri.go:89] found id: ""
	I0725 18:53:20.208489   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.208497   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:20.208505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:20.208519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:20.220843   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:20.220867   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:20.287846   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:20.287871   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:20.287887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:20.362354   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:20.362391   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:20.399616   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:20.399650   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.790264   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.288082   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.100856   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:24.600029   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:23.148297   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:25.647615   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.950804   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:22.963553   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:22.963625   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:22.996193   60176 cri.go:89] found id: ""
	I0725 18:53:22.996215   60176 logs.go:276] 0 containers: []
	W0725 18:53:22.996222   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:22.996228   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:22.996273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:23.029417   60176 cri.go:89] found id: ""
	I0725 18:53:23.029446   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.029455   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:23.029460   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:23.029508   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:23.062381   60176 cri.go:89] found id: ""
	I0725 18:53:23.062406   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.062414   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:23.062419   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:23.062471   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:23.093948   60176 cri.go:89] found id: ""
	I0725 18:53:23.093975   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.093987   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:23.093995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:23.094066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:23.128049   60176 cri.go:89] found id: ""
	I0725 18:53:23.128076   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.128085   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:23.128091   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:23.128139   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:23.164593   60176 cri.go:89] found id: ""
	I0725 18:53:23.164617   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.164625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:23.164631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:23.164683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:23.197996   60176 cri.go:89] found id: ""
	I0725 18:53:23.198024   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.198032   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:23.198037   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:23.198087   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:23.233498   60176 cri.go:89] found id: ""
	I0725 18:53:23.233533   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.233545   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:23.233565   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:23.233580   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:23.287473   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:23.287506   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:23.300308   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:23.300358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:23.368879   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:23.368906   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:23.368919   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:23.445420   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:23.445453   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:25.985626   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:25.997898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:25.997971   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:26.030558   60176 cri.go:89] found id: ""
	I0725 18:53:26.030584   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.030593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:26.030599   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:26.030660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:26.067209   60176 cri.go:89] found id: ""
	I0725 18:53:26.067245   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.067256   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:26.067263   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:26.067348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:26.100872   60176 cri.go:89] found id: ""
	I0725 18:53:26.100891   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.100897   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:26.100902   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:26.100949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:26.135077   60176 cri.go:89] found id: ""
	I0725 18:53:26.135102   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.135110   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:26.135115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:26.135175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:26.171332   60176 cri.go:89] found id: ""
	I0725 18:53:26.171431   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.171445   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:26.171452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:26.171507   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:26.205883   60176 cri.go:89] found id: ""
	I0725 18:53:26.205912   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.205923   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:26.205930   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:26.205989   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:26.240407   60176 cri.go:89] found id: ""
	I0725 18:53:26.240436   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.240446   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:26.240452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:26.240513   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:26.273041   60176 cri.go:89] found id: ""
	I0725 18:53:26.273068   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.273078   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:26.273089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:26.273103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:26.327783   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:26.327815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:26.342925   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:26.342952   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:53:24.786526   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:26.786771   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:28.786890   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.100267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.600204   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.648059   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.648771   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:53:26.412563   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:26.412589   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:26.412605   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:26.493182   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:26.493222   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
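	(Note on the repeating block above: on each retry the runner probes every expected control-plane component by name with "sudo crictl ps -a --quiet --name=<component>", gets back an empty ID list for all of them, and so falls back to gathering kubelet, dmesg, CRI-O, and container-status output. A minimal, hypothetical Go sketch of that probe loop — illustrative only, not minikube's actual cri.go/logs.go code — looks like this:)

	// Hypothetical sketch of the probe pattern shown in the log: ask the CRI for any
	// container (running or exited) whose name matches a control-plane component and
	// treat an empty ID list as "not found". Illustrative only, not minikube's code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// probe returns the container IDs crictl reports for the given name filter.
	func probe(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		// crictl prints one ID per line; empty output yields an empty slice.
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := probe(c)
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			} else {
				fmt.Printf("%q: %v\n", c, ids)
			}
		}
	}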
	I0725 18:53:29.030816   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:29.044047   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:29.044104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:29.077288   60176 cri.go:89] found id: ""
	I0725 18:53:29.077335   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.077354   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:29.077362   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:29.077429   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:29.113350   60176 cri.go:89] found id: ""
	I0725 18:53:29.113383   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.113395   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:29.113402   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:29.113472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:29.147123   60176 cri.go:89] found id: ""
	I0725 18:53:29.147151   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.147161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:29.147168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:29.147224   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:29.182248   60176 cri.go:89] found id: ""
	I0725 18:53:29.182279   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.182296   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:29.182304   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:29.182367   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:29.215750   60176 cri.go:89] found id: ""
	I0725 18:53:29.215777   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.215788   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:29.215795   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:29.215857   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:29.249001   60176 cri.go:89] found id: ""
	I0725 18:53:29.249027   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.249037   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:29.249044   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:29.249104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:29.281774   60176 cri.go:89] found id: ""
	I0725 18:53:29.281802   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.281812   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:29.281819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:29.281879   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:29.318703   60176 cri.go:89] found id: ""
	I0725 18:53:29.318728   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.318736   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:29.318744   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:29.318760   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:29.398145   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:29.398170   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:29.398184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:29.474090   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:29.474126   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.510143   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:29.510216   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:29.562952   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:29.562988   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:30.787145   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.788031   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.099672   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.148832   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.647209   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.076743   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:32.090035   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:32.090108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:32.123139   60176 cri.go:89] found id: ""
	I0725 18:53:32.123173   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.123184   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:32.123191   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:32.123255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:32.156337   60176 cri.go:89] found id: ""
	I0725 18:53:32.156363   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.156372   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:32.156378   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:32.156437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:32.191566   60176 cri.go:89] found id: ""
	I0725 18:53:32.191597   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.191609   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:32.191617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:32.191684   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:32.225480   60176 cri.go:89] found id: ""
	I0725 18:53:32.225519   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.225528   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:32.225535   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:32.225593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:32.257129   60176 cri.go:89] found id: ""
	I0725 18:53:32.257160   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.257169   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:32.257175   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:32.257221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:32.298142   60176 cri.go:89] found id: ""
	I0725 18:53:32.298171   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.298180   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:32.298190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:32.298240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:32.331052   60176 cri.go:89] found id: ""
	I0725 18:53:32.331081   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.331092   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:32.331098   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:32.331143   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:32.364841   60176 cri.go:89] found id: ""
	I0725 18:53:32.364871   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.364882   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:32.364892   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:32.364907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:32.417931   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:32.417970   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:32.432131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:32.432159   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:32.499759   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:32.499784   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:32.499806   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:32.579140   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:32.579191   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:35.120647   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:35.133992   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:35.134084   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:35.172030   60176 cri.go:89] found id: ""
	I0725 18:53:35.172052   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.172061   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:35.172067   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:35.172123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:35.207893   60176 cri.go:89] found id: ""
	I0725 18:53:35.207920   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.207930   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:35.207937   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:35.207991   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:35.241626   60176 cri.go:89] found id: ""
	I0725 18:53:35.241651   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.241661   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:35.241668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:35.241732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:35.274017   60176 cri.go:89] found id: ""
	I0725 18:53:35.274047   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.274058   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:35.274064   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:35.274129   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:35.308778   60176 cri.go:89] found id: ""
	I0725 18:53:35.308809   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.308820   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:35.308827   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:35.308890   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:35.341366   60176 cri.go:89] found id: ""
	I0725 18:53:35.341392   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.341400   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:35.341406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:35.341461   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:35.373955   60176 cri.go:89] found id: ""
	I0725 18:53:35.373983   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.373994   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:35.374001   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:35.374058   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:35.404705   60176 cri.go:89] found id: ""
	I0725 18:53:35.404733   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.404743   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:35.404755   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:35.404794   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:35.455009   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:35.455043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:35.469113   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:35.469141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:35.533466   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:35.533497   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:35.533514   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:35.608513   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:35.608546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:34.789202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:37.287021   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.100385   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:40.599540   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.647379   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.648503   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.147602   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
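	(The interleaved pod_ready lines come from the three parallel test processes — PIDs 59645, 59378, and 60732 — each polling a metrics-server pod in kube-system that never reaches the Ready condition. Below is a hedged sketch of such a readiness poll using client-go; the kubeconfig path, retry count, and pod name are assumptions taken from the log, not minikube's pod_ready.go implementation:)

	// Hypothetical readiness poll with client-go: fetch the pod periodically and report
	// whether its PodReady condition is True. Assumes a kubeconfig at the default home
	// location; a real wait would also enforce an overall deadline.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		const ns, name = "kube-system", "metrics-server-569cc877fc-5js8s" // pod name as seen in the log
		for i := 0; i < 10; i++ {
			pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				fmt.Println("get pod:", err)
			} else if podReady(pod) {
				fmt.Println("pod is Ready")
				return
			} else {
				fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
			}
			time.Sleep(2 * time.Second)
		}
	}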
	I0725 18:53:38.147415   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:38.159974   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:38.160032   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:38.191108   60176 cri.go:89] found id: ""
	I0725 18:53:38.191138   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.191150   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:38.191157   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:38.191207   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:38.223494   60176 cri.go:89] found id: ""
	I0725 18:53:38.223519   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.223527   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:38.223533   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:38.223583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:38.254433   60176 cri.go:89] found id: ""
	I0725 18:53:38.254462   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.254473   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:38.254480   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:38.254546   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:38.286229   60176 cri.go:89] found id: ""
	I0725 18:53:38.286258   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.286268   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:38.286276   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:38.286339   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:38.323332   60176 cri.go:89] found id: ""
	I0725 18:53:38.323362   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.323371   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:38.323378   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:38.323441   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:38.356260   60176 cri.go:89] found id: ""
	I0725 18:53:38.356290   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.356301   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:38.356309   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:38.356383   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:38.388543   60176 cri.go:89] found id: ""
	I0725 18:53:38.388571   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.388582   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:38.388588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:38.388660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:38.424003   60176 cri.go:89] found id: ""
	I0725 18:53:38.424030   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.424040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:38.424051   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:38.424065   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:38.474963   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:38.474995   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:38.488392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:38.488425   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:38.561922   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:38.561946   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:38.562116   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:38.646569   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:38.646604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:41.190319   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:41.202314   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:41.202382   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:41.238344   60176 cri.go:89] found id: ""
	I0725 18:53:41.238370   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.238378   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:41.238383   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:41.238438   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:41.272219   60176 cri.go:89] found id: ""
	I0725 18:53:41.272252   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.272263   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:41.272271   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:41.272349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:41.307125   60176 cri.go:89] found id: ""
	I0725 18:53:41.307151   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.307161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:41.307168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:41.307230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:41.339277   60176 cri.go:89] found id: ""
	I0725 18:53:41.339307   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.339320   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:41.339328   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:41.339394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:41.373989   60176 cri.go:89] found id: ""
	I0725 18:53:41.374103   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.374126   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:41.374136   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:41.374205   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:39.287244   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.287891   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.787538   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:42.600625   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.099276   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.647388   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.648749   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.404939   60176 cri.go:89] found id: ""
	I0725 18:53:41.404968   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.404979   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:41.404986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:41.405050   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:41.436889   60176 cri.go:89] found id: ""
	I0725 18:53:41.436919   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.436931   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:41.436940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:41.437009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:41.468457   60176 cri.go:89] found id: ""
	I0725 18:53:41.468486   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.468496   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:41.468506   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:41.468520   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:41.519499   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:41.519529   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:41.533653   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:41.533688   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:41.602134   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:41.602156   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:41.602171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:41.676181   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:41.676214   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.213932   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:44.226286   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:44.226352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:44.258782   60176 cri.go:89] found id: ""
	I0725 18:53:44.258817   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.258829   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:44.258835   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:44.258887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:44.308398   60176 cri.go:89] found id: ""
	I0725 18:53:44.308424   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.308432   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:44.308437   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:44.308499   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:44.339388   60176 cri.go:89] found id: ""
	I0725 18:53:44.339414   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.339424   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:44.339430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:44.339493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:44.369635   60176 cri.go:89] found id: ""
	I0725 18:53:44.369669   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.369679   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:44.369685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:44.369751   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:44.403834   60176 cri.go:89] found id: ""
	I0725 18:53:44.403859   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.403869   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:44.403876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:44.403939   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:44.439172   60176 cri.go:89] found id: ""
	I0725 18:53:44.439204   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.439215   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:44.439222   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:44.439287   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:44.474638   60176 cri.go:89] found id: ""
	I0725 18:53:44.474664   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.474674   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:44.474681   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:44.474743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:44.506205   60176 cri.go:89] found id: ""
	I0725 18:53:44.506226   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.506233   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:44.506241   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:44.506253   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:44.587955   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:44.587994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.626251   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:44.626276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:44.679008   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:44.679040   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:44.691749   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:44.691776   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:44.763419   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
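	(Every cycle's describe-nodes step fails the same way: kubectl cannot reach localhost:8443, which is consistent with the probes above finding no kube-apiserver container at all. A quick, hypothetical way to confirm from Go that nothing is listening on that port — an illustrative check, not part of the test suite:)

	// Hypothetical connectivity check: if nothing listens on the apiserver port, the
	// dial fails with "connection refused", matching the kubectl error in the log.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("something is listening on 127.0.0.1:8443")
	}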
	I0725 18:53:46.286260   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.287172   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.099923   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:49.600555   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.148223   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:50.648549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.263738   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:47.275907   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:47.275974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:47.313612   60176 cri.go:89] found id: ""
	I0725 18:53:47.313642   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.313651   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:47.313662   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:47.313727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:47.345186   60176 cri.go:89] found id: ""
	I0725 18:53:47.345215   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.345226   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:47.345233   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:47.345304   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:47.378074   60176 cri.go:89] found id: ""
	I0725 18:53:47.378103   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.378114   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:47.378128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:47.378188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:47.407147   60176 cri.go:89] found id: ""
	I0725 18:53:47.407176   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.407186   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:47.407193   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:47.407255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:47.437015   60176 cri.go:89] found id: ""
	I0725 18:53:47.437049   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.437061   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:47.437068   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:47.437153   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:47.469201   60176 cri.go:89] found id: ""
	I0725 18:53:47.469231   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.469241   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:47.469248   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:47.469331   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:47.501160   60176 cri.go:89] found id: ""
	I0725 18:53:47.501189   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.501199   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:47.501206   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:47.501264   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:47.535102   60176 cri.go:89] found id: ""
	I0725 18:53:47.535140   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.535149   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:47.535159   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:47.535184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:47.547568   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:47.547593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:47.616025   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:47.616048   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:47.616062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:47.690450   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:47.690482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:47.725553   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:47.725589   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.281640   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:50.295201   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:50.295272   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:50.331689   60176 cri.go:89] found id: ""
	I0725 18:53:50.331713   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.331721   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:50.331726   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:50.331770   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:50.362392   60176 cri.go:89] found id: ""
	I0725 18:53:50.362422   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.362434   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:50.362441   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:50.362505   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:50.393410   60176 cri.go:89] found id: ""
	I0725 18:53:50.393433   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.393441   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:50.393449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:50.393493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:50.425041   60176 cri.go:89] found id: ""
	I0725 18:53:50.425068   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.425079   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:50.425085   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:50.425144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:50.461533   60176 cri.go:89] found id: ""
	I0725 18:53:50.461556   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.461563   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:50.461568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:50.461614   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:50.494395   60176 cri.go:89] found id: ""
	I0725 18:53:50.494417   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.494425   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:50.494431   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:50.494485   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:50.528639   60176 cri.go:89] found id: ""
	I0725 18:53:50.528663   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.528672   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:50.528678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:50.528724   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:50.562007   60176 cri.go:89] found id: ""
	I0725 18:53:50.562032   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.562040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:50.562049   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:50.562062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.612107   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:50.612141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:50.624516   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:50.624540   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:50.724772   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:50.724799   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:50.724818   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:50.813891   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:50.813924   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:50.288626   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.786395   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.100268   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:54.598939   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.147764   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:55.147940   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.352629   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:53.366863   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:53.366941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:53.401238   60176 cri.go:89] found id: ""
	I0725 18:53:53.401266   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.401277   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:53.401284   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:53.401351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:53.434133   60176 cri.go:89] found id: ""
	I0725 18:53:53.434166   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.434178   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:53.434186   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:53.434248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:53.470135   60176 cri.go:89] found id: ""
	I0725 18:53:53.470157   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.470165   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:53.470170   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:53.470217   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:53.512591   60176 cri.go:89] found id: ""
	I0725 18:53:53.512613   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.512621   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:53.512626   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:53.512683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:53.544476   60176 cri.go:89] found id: ""
	I0725 18:53:53.544506   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.544517   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:53.544524   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:53.544591   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:53.577697   60176 cri.go:89] found id: ""
	I0725 18:53:53.577727   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.577746   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:53.577753   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:53.577816   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:53.610729   60176 cri.go:89] found id: ""
	I0725 18:53:53.610754   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.610761   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:53.610769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:53.610817   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:53.645127   60176 cri.go:89] found id: ""
	I0725 18:53:53.645154   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.645164   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:53.645174   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:53.645188   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:53.694575   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:53.694608   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:53.707931   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:53.707958   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:53.778423   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:53.778446   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:53.778460   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:53.860424   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:53.860458   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:55.286806   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.288524   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:56.600953   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:59.099301   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.647861   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:00.148873   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:56.400993   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:56.418757   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:56.418834   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:56.466300   60176 cri.go:89] found id: ""
	I0725 18:53:56.466330   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.466340   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:56.466348   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:56.466409   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:56.523080   60176 cri.go:89] found id: ""
	I0725 18:53:56.523107   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.523117   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:56.523124   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:56.523184   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:56.554854   60176 cri.go:89] found id: ""
	I0725 18:53:56.554881   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.554891   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:56.554898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:56.554953   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:56.588851   60176 cri.go:89] found id: ""
	I0725 18:53:56.588876   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.588885   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:56.588892   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:56.588958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:56.623818   60176 cri.go:89] found id: ""
	I0725 18:53:56.623840   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.623849   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:56.623854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:56.623902   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:56.658958   60176 cri.go:89] found id: ""
	I0725 18:53:56.658982   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.658990   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:56.658996   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:56.659044   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:56.694689   60176 cri.go:89] found id: ""
	I0725 18:53:56.694715   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.694724   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:56.694729   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:56.694780   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:56.728038   60176 cri.go:89] found id: ""
	I0725 18:53:56.728067   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.728077   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:56.728088   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:56.728103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:56.805628   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:56.805657   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:56.805672   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:56.886168   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:56.886210   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:56.923004   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:56.923043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:56.975693   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:56.975729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
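
For context, the repeated "sudo crictl ps -a --quiet --name=<component>" probes in the log above all follow one pattern: run the command, treat empty output as "no container found", and move on to log gathering. The following is a minimal Go sketch of that loop under stated assumptions (it shells out locally with exec.Command rather than through minikube's ssh_runner, and probeContainers is a hypothetical helper, not minikube's cri.go API):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // probeContainers runs `sudo crictl ps -a --quiet --name=<name>` and returns
    // the container IDs it finds; an empty slice means no container matched.
    func probeContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
    		ids, err := probeContainers(component)
    		if err != nil {
    			fmt.Printf("probe %s failed: %v\n", component, err)
    			continue
    		}
    		if len(ids) == 0 {
    			// Mirrors the "No container was found matching ..." warnings above.
    			fmt.Printf("No container was found matching %q\n", component)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
    	}
    }
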
	I0725 18:53:59.491244   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:59.503301   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:59.503363   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:59.540674   60176 cri.go:89] found id: ""
	I0725 18:53:59.540699   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.540707   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:59.540712   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:59.540763   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:59.575145   60176 cri.go:89] found id: ""
	I0725 18:53:59.575182   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.575192   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:59.575199   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:59.575260   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:59.606952   60176 cri.go:89] found id: ""
	I0725 18:53:59.606978   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.606989   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:59.606995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:59.607056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:59.645110   60176 cri.go:89] found id: ""
	I0725 18:53:59.645136   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.645147   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:59.645155   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:59.645218   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:59.676479   60176 cri.go:89] found id: ""
	I0725 18:53:59.676499   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.676507   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:59.676512   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:59.676581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:59.707454   60176 cri.go:89] found id: ""
	I0725 18:53:59.707482   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.707493   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:59.707500   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:59.707575   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:59.740387   60176 cri.go:89] found id: ""
	I0725 18:53:59.740414   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.740421   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:59.740427   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:59.740474   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:59.774171   60176 cri.go:89] found id: ""
	I0725 18:53:59.774199   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.774207   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:59.774216   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:59.774231   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:59.825138   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:59.825171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.839715   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:59.839742   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:59.905645   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:59.905681   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:59.905699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:59.980909   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:59.980943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:59.787202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.286987   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:01.099490   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:03.100056   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.602329   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.647803   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:04.648473   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
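
The interleaved pod_ready.go lines are produced by a poll loop: fetch the pod, inspect its Ready condition, and retry until a deadline expires. A rough client-go sketch of that idea is below; waitPodReady and the 2-second retry interval are illustrative assumptions, not minikube's actual code, and the function expects an already-constructed *kubernetes.Clientset:

    package podwait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's Ready condition is True or the context expires.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    			// Corresponds to the repeated `has status "Ready":"False"` lines above.
    			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
    		}
    		select {
    		case <-ctx.Done():
    			// e.g. "context deadline exceeded" after the 4m0s wait seen later in the log.
    			return ctx.Err()
    		case <-time.After(2 * time.Second):
    		}
    	}
    }
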
	I0725 18:54:02.524178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:02.538055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:02.538113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:02.576234   60176 cri.go:89] found id: ""
	I0725 18:54:02.576259   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.576268   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:02.576274   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:02.576340   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:02.607765   60176 cri.go:89] found id: ""
	I0725 18:54:02.607792   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.607803   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:02.607810   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:02.607865   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:02.640566   60176 cri.go:89] found id: ""
	I0725 18:54:02.640592   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.640601   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:02.640606   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:02.640655   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:02.673476   60176 cri.go:89] found id: ""
	I0725 18:54:02.673504   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.673512   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:02.673517   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:02.673565   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:02.706270   60176 cri.go:89] found id: ""
	I0725 18:54:02.706299   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.706309   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:02.706316   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:02.706376   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:02.737108   60176 cri.go:89] found id: ""
	I0725 18:54:02.737138   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.737146   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:02.737152   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:02.737200   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:02.775681   60176 cri.go:89] found id: ""
	I0725 18:54:02.775710   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.775719   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:02.775724   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:02.775773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:02.808116   60176 cri.go:89] found id: ""
	I0725 18:54:02.808151   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.808159   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:02.808169   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:02.808182   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:02.872505   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:02.872534   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:02.872557   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:02.948158   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:02.948193   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:02.982990   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:02.983020   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:03.031910   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:03.031943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:05.545994   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:05.559105   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.559174   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.594106   60176 cri.go:89] found id: ""
	I0725 18:54:05.594134   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.594144   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:05.594151   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.594232   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.630148   60176 cri.go:89] found id: ""
	I0725 18:54:05.630172   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.630179   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:05.630185   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.630242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.662968   60176 cri.go:89] found id: ""
	I0725 18:54:05.662993   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.663003   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:05.663010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.663059   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.696645   60176 cri.go:89] found id: ""
	I0725 18:54:05.696668   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.696676   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:05.696682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.696738   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:05.730027   60176 cri.go:89] found id: ""
	I0725 18:54:05.730050   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.730058   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:05.730063   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:05.730113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:05.760918   60176 cri.go:89] found id: ""
	I0725 18:54:05.760946   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.760956   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:05.760968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:05.761027   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:05.801025   60176 cri.go:89] found id: ""
	I0725 18:54:05.801057   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.801068   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:05.801075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:05.801142   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:05.834567   60176 cri.go:89] found id: ""
	I0725 18:54:05.834594   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.834605   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:05.834615   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:05.834630   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:05.903812   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:05.903840   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:05.903855   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:05.981642   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:05.981671   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.024246   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.024316   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.081777   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:06.081802   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:04.786654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.786668   59645 pod_ready.go:81] duration metric: took 4m0.006258788s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:05.786698   59645 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:05.786708   59645 pod_ready.go:38] duration metric: took 4m6.551775292s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:54:05.786726   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:05.786754   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.786811   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.838362   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:05.838384   59645 cri.go:89] found id: ""
	I0725 18:54:05.838391   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:05.838441   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.843131   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.843190   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.882099   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:05.882125   59645 cri.go:89] found id: ""
	I0725 18:54:05.882134   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:05.882191   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.886383   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.886450   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.931971   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:05.932001   59645 cri.go:89] found id: ""
	I0725 18:54:05.932011   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:05.932069   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.936830   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.936891   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.976146   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:05.976171   59645 cri.go:89] found id: ""
	I0725 18:54:05.976179   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:05.976244   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.980878   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.980959   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:06.028640   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.028663   59645 cri.go:89] found id: ""
	I0725 18:54:06.028672   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:06.028720   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.033353   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:06.033411   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:06.072245   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.072269   59645 cri.go:89] found id: ""
	I0725 18:54:06.072279   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:06.072352   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.076614   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:06.076672   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:06.116418   59645 cri.go:89] found id: ""
	I0725 18:54:06.116443   59645 logs.go:276] 0 containers: []
	W0725 18:54:06.116453   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:06.116460   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:06.116520   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:06.154703   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:06.154725   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:06.154730   59645 cri.go:89] found id: ""
	I0725 18:54:06.154737   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:06.154795   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.158699   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.162190   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:06.162213   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.199003   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:06.199033   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.248171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:06.248208   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:06.774102   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:06.774139   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.815959   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.815984   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.872973   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:06.873013   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:06.915825   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:06.915858   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:06.958394   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:06.958423   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:06.993405   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:06.993437   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:07.026716   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:07.026745   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:07.040444   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:07.040474   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:07.156511   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:07.156541   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:07.191065   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:07.191091   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:08.099408   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:10.100363   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:07.148587   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:09.648368   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:08.598790   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:08.611234   60176 kubeadm.go:597] duration metric: took 4m4.357436643s to restartPrimaryControlPlane
	W0725 18:54:08.611305   60176 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0725 18:54:08.611343   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:54:13.076782   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.465409333s)
	I0725 18:54:13.076872   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:13.091089   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:54:13.102042   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:54:13.111117   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:54:13.111134   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:54:13.111171   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:54:13.119629   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:54:13.119676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:54:13.128676   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:54:13.136705   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:54:13.136761   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:54:13.145959   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.154628   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:54:13.154676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.163164   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:54:13.171473   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:54:13.171552   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
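
The config-check sequence just above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file where it is absent (or missing entirely). A simplified Go equivalent is sketched here; it uses `grep -q` instead of the plain `grep` in the log, hard-codes the endpoint for illustration, and is not the actual kubeadm.go logic:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // cleanStaleConfigs removes kubeconfig files that do not reference the expected
    // control-plane endpoint, mirroring the grep/rm sequence in the log above.
    func cleanStaleConfigs(endpoint string, files []string) {
    	for _, f := range files {
    		// `grep -q` exits non-zero when the endpoint is absent or the file does not exist.
    		if err := exec.Command("sudo", "grep", "-q", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			if rmErr := exec.Command("sudo", "rm", "-f", f).Run(); rmErr != nil {
    				fmt.Fprintf(os.Stderr, "failed to remove %s: %v\n", f, rmErr)
    			}
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
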
	I0725 18:54:13.179663   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:54:13.244923   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:54:13.245063   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:54:13.387687   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:54:13.387814   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:54:13.387941   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:54:13.566258   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:54:09.724251   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:09.740055   59645 api_server.go:72] duration metric: took 4m18.224261341s to wait for apiserver process to appear ...
	I0725 18:54:09.740086   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:09.740125   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:09.740189   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:09.780027   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:09.780052   59645 cri.go:89] found id: ""
	I0725 18:54:09.780061   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:09.780121   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.784110   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:09.784170   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:09.821158   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:09.821177   59645 cri.go:89] found id: ""
	I0725 18:54:09.821185   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:09.821245   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.825235   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:09.825294   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:09.863880   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:09.863903   59645 cri.go:89] found id: ""
	I0725 18:54:09.863910   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:09.863956   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.868206   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:09.868260   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:09.902168   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:09.902191   59645 cri.go:89] found id: ""
	I0725 18:54:09.902200   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:09.902260   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.906583   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:09.906637   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:09.948980   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:09.948997   59645 cri.go:89] found id: ""
	I0725 18:54:09.949004   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:09.949061   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.953072   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:09.953135   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:09.987862   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:09.987891   59645 cri.go:89] found id: ""
	I0725 18:54:09.987901   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:09.987970   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.991893   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:09.991956   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:10.029171   59645 cri.go:89] found id: ""
	I0725 18:54:10.029201   59645 logs.go:276] 0 containers: []
	W0725 18:54:10.029212   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:10.029229   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:10.029298   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:10.069098   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.069123   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.069129   59645 cri.go:89] found id: ""
	I0725 18:54:10.069138   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:10.069185   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.073777   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.077625   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:10.077650   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:10.089863   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:10.089889   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:10.139865   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:10.139906   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:10.178236   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:10.178263   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:10.216425   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:10.216455   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:10.249818   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:10.249845   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.286603   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:10.286629   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:10.325189   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:10.325215   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:10.378752   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:10.378793   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:10.485922   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:10.485964   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:10.535583   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:10.535627   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:10.586930   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:10.586963   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.626295   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:10.626323   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.552874   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:54:13.558265   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:54:13.559439   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:13.559459   59645 api_server.go:131] duration metric: took 3.819366874s to wait for apiserver health ...
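
The healthz wait recorded above amounts to an HTTPS GET against the apiserver's /healthz endpoint, retried until it returns 200 with body "ok". A minimal Go sketch of such a poll follows; the InsecureSkipVerify transport is purely for brevity here (a real check should trust the cluster CA), and the URL and timeout are illustrative:

    package main

    import (
    	"context"
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitAPIServerHealthy polls the given /healthz URL until it returns HTTP 200
    // or the context expires.
    func waitAPIServerHealthy(ctx context.Context, url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustrative only: verify against the cluster CA in real code.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned 200:\n%s\n", url, body)
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(time.Second):
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if err := waitAPIServerHealthy(ctx, "https://192.168.50.221:8444/healthz"); err != nil {
    		fmt.Println("apiserver never became healthy:", err)
    	}
    }
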
	I0725 18:54:13.559467   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:13.559491   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:13.559539   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:13.597965   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:13.597988   59645 cri.go:89] found id: ""
	I0725 18:54:13.597996   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:13.598050   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.602225   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:13.602291   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:13.652885   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:13.652914   59645 cri.go:89] found id: ""
	I0725 18:54:13.652924   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:13.652982   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.656970   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:13.657031   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:13.690769   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:13.690792   59645 cri.go:89] found id: ""
	I0725 18:54:13.690802   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:13.690861   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.694630   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:13.694692   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:13.732306   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:13.732346   59645 cri.go:89] found id: ""
	I0725 18:54:13.732356   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:13.732413   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.736242   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:13.736311   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:13.771516   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:13.771543   59645 cri.go:89] found id: ""
	I0725 18:54:13.771552   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:13.771610   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.775592   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:13.775654   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:13.812821   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:13.812847   59645 cri.go:89] found id: ""
	I0725 18:54:13.812857   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:13.812911   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.817039   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:13.817097   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:13.856529   59645 cri.go:89] found id: ""
	I0725 18:54:13.856560   59645 logs.go:276] 0 containers: []
	W0725 18:54:13.856577   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:13.856584   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:13.856647   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:13.889734   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:13.889760   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:13.889766   59645 cri.go:89] found id: ""
	I0725 18:54:13.889774   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:13.889831   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.893730   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.897171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:13.897188   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.568262   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:54:13.568407   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:54:13.568493   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:54:13.568599   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:54:13.568677   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:54:13.568771   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:54:13.568844   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:54:13.569095   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:54:13.570081   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:54:13.570719   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:54:13.571213   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:54:13.571395   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:54:13.571482   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:54:13.900234   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:54:14.171283   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:54:14.317774   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:54:14.522412   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:54:14.537598   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:54:14.539553   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:54:14.539629   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:54:14.683755   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:54:12.600280   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.601203   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:11.648941   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.148132   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.685635   60176 out.go:204]   - Booting up control plane ...
	I0725 18:54:14.685764   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:54:14.697124   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:54:14.698087   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:54:14.698830   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:54:14.701051   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:54:14.314664   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:14.314702   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:14.359956   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:14.359991   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:14.429456   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:14.429491   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:14.551238   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:14.551279   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:14.598045   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:14.598082   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:14.633668   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:14.633700   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:14.668871   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:14.668897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:14.732575   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:14.732644   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:14.748852   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:14.748897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:14.794021   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:14.794058   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:14.836447   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:14.836481   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:14.870813   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:14.870852   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:17.414647   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:17.414678   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.414683   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.414687   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.414691   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.414694   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.414699   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.414704   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.414710   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.414718   59645 system_pods.go:74] duration metric: took 3.85524656s to wait for pod list to return data ...
	I0725 18:54:17.414726   59645 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:17.417047   59645 default_sa.go:45] found service account: "default"
	I0725 18:54:17.417067   59645 default_sa.go:55] duration metric: took 2.333088ms for default service account to be created ...
	I0725 18:54:17.417074   59645 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:17.422890   59645 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:17.422915   59645 system_pods.go:89] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.422920   59645 system_pods.go:89] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.422925   59645 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.422929   59645 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.422933   59645 system_pods.go:89] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.422936   59645 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.422942   59645 system_pods.go:89] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.422947   59645 system_pods.go:89] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.422953   59645 system_pods.go:126] duration metric: took 5.874194ms to wait for k8s-apps to be running ...
	I0725 18:54:17.422958   59645 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:17.422998   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:17.438463   59645 system_svc.go:56] duration metric: took 15.497014ms WaitForService to wait for kubelet
	I0725 18:54:17.438490   59645 kubeadm.go:582] duration metric: took 4m25.922705533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:17.438511   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:17.441632   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:17.441653   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:17.441671   59645 node_conditions.go:105] duration metric: took 3.155244ms to run NodePressure ...
	I0725 18:54:17.441682   59645 start.go:241] waiting for startup goroutines ...
	I0725 18:54:17.441688   59645 start.go:246] waiting for cluster config update ...
	I0725 18:54:17.441698   59645 start.go:255] writing updated cluster config ...
	I0725 18:54:17.441957   59645 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:17.491791   59645 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:17.493992   59645 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-600433" cluster and "default" namespace by default
	I0725 18:54:16.601481   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:19.100120   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:16.646970   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:18.647757   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:20.650382   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:21.599857   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:24.099007   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:23.147215   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:25.148069   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:26.599428   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:28.600159   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:30.601469   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:27.150076   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:29.647741   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:33.100850   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:35.600080   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:31.648293   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:34.147584   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:36.147883   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.099662   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.601691   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.148559   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.648470   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:43.099948   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:45.599146   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:41.647969   60732 pod_ready.go:81] duration metric: took 4m0.006188545s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:41.647993   60732 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:41.647999   60732 pod_ready.go:38] duration metric: took 4m4.549463734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:54:41.648014   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:41.648042   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:41.648093   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:41.701960   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:41.701990   60732 cri.go:89] found id: ""
	I0725 18:54:41.702000   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:41.702060   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.706683   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:41.706775   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:41.741997   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:41.742019   60732 cri.go:89] found id: ""
	I0725 18:54:41.742027   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:41.742070   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.745965   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:41.746019   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:41.787104   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:41.787127   60732 cri.go:89] found id: ""
	I0725 18:54:41.787137   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:41.787189   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.791375   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:41.791441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:41.836394   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:41.836417   60732 cri.go:89] found id: ""
	I0725 18:54:41.836425   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:41.836472   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.840775   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:41.840830   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:41.877307   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:41.877328   60732 cri.go:89] found id: ""
	I0725 18:54:41.877338   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:41.877384   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.881221   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:41.881289   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:41.918540   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:41.918569   60732 cri.go:89] found id: ""
	I0725 18:54:41.918579   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:41.918639   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.922866   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:41.922975   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:41.957335   60732 cri.go:89] found id: ""
	I0725 18:54:41.957361   60732 logs.go:276] 0 containers: []
	W0725 18:54:41.957371   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:41.957377   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:41.957441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:41.998241   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:41.998269   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:41.998274   60732 cri.go:89] found id: ""
	I0725 18:54:41.998283   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:41.998333   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.002872   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.006541   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:42.006571   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:42.039456   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:42.039484   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:42.535367   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:42.535412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:42.592118   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:42.592165   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:42.606753   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:42.606784   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:42.656287   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:42.656337   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:42.696439   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:42.696470   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:42.752874   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:42.752913   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:42.786513   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:42.786540   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:42.914470   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:42.914506   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:42.951371   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:42.951399   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:42.989249   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:42.989278   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:43.030911   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:43.030945   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:45.581560   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:45.599532   60732 api_server.go:72] duration metric: took 4m15.71630146s to wait for apiserver process to appear ...
	I0725 18:54:45.599559   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:45.599602   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:45.599669   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:45.643222   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:45.643245   60732 cri.go:89] found id: ""
	I0725 18:54:45.643251   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:45.643293   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.647594   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:45.647646   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:45.685817   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:45.685843   60732 cri.go:89] found id: ""
	I0725 18:54:45.685851   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:45.685908   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.689698   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:45.689746   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:45.723068   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:45.723086   60732 cri.go:89] found id: ""
	I0725 18:54:45.723093   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:45.723139   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.727312   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:45.727373   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:45.764668   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.764691   60732 cri.go:89] found id: ""
	I0725 18:54:45.764698   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:45.764746   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.768763   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:45.768821   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:45.804140   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.804162   60732 cri.go:89] found id: ""
	I0725 18:54:45.804171   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:45.804229   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.807907   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:45.807962   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:45.845435   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:45.845458   60732 cri.go:89] found id: ""
	I0725 18:54:45.845465   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:45.845516   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.849429   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:45.849488   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:45.882663   60732 cri.go:89] found id: ""
	I0725 18:54:45.882696   60732 logs.go:276] 0 containers: []
	W0725 18:54:45.882706   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:45.882713   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:45.882779   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:45.916947   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:45.916975   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:45.916988   60732 cri.go:89] found id: ""
	I0725 18:54:45.916995   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:45.917039   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.921470   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.925153   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:45.925175   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.959693   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:45.959722   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.998162   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:45.998188   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:47.599790   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:49.605818   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:46.424235   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:46.424271   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:46.465439   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:46.465468   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:46.516900   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:46.516931   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:46.629700   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:46.629777   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:46.673233   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:46.673264   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:46.706641   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:46.706680   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:46.741970   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:46.742002   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:46.755337   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:46.755364   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:46.805564   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:46.805594   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:46.856226   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:46.856257   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.398852   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:54:49.403222   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:54:49.404180   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:49.404199   60732 api_server.go:131] duration metric: took 3.804634202s to wait for apiserver health ...
	I0725 18:54:49.404206   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:49.404227   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:49.404269   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:49.439543   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:49.439561   60732 cri.go:89] found id: ""
	I0725 18:54:49.439568   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:49.439625   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.444958   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:49.445028   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:49.482934   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:49.482959   60732 cri.go:89] found id: ""
	I0725 18:54:49.482969   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:49.483026   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.486982   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:49.487057   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:49.526379   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.526405   60732 cri.go:89] found id: ""
	I0725 18:54:49.526415   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:49.526481   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.531314   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:49.531401   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:49.565687   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.565716   60732 cri.go:89] found id: ""
	I0725 18:54:49.565724   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:49.565772   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.569706   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:49.569778   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:49.606900   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.606923   60732 cri.go:89] found id: ""
	I0725 18:54:49.606932   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:49.606986   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.611079   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:49.611155   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:49.645077   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.645099   60732 cri.go:89] found id: ""
	I0725 18:54:49.645107   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:49.645165   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.648932   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:49.648984   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:49.685181   60732 cri.go:89] found id: ""
	I0725 18:54:49.685209   60732 logs.go:276] 0 containers: []
	W0725 18:54:49.685220   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:49.685228   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:49.685290   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:49.718825   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.718852   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:49.718858   60732 cri.go:89] found id: ""
	I0725 18:54:49.718866   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:49.718927   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.723182   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.726590   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:49.726611   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.760011   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:49.760038   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.816552   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:49.816593   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.852003   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:49.852034   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.887907   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:49.887937   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.920728   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:49.920763   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:49.972145   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:49.972177   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:49.986365   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:49.986391   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:50.088100   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:50.088141   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:50.137382   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:50.137412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:50.181636   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:50.181668   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:50.217427   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:50.217452   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:50.575378   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:50.575421   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:53.125288   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:53.125322   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.125327   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.125331   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.125335   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.125338   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.125341   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.125347   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.125352   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.125358   60732 system_pods.go:74] duration metric: took 3.721147072s to wait for pod list to return data ...
	I0725 18:54:53.125365   60732 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:53.127677   60732 default_sa.go:45] found service account: "default"
	I0725 18:54:53.127695   60732 default_sa.go:55] duration metric: took 2.325927ms for default service account to be created ...
	I0725 18:54:53.127702   60732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:53.134656   60732 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:53.134682   60732 system_pods.go:89] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.134690   60732 system_pods.go:89] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.134697   60732 system_pods.go:89] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.134707   60732 system_pods.go:89] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.134713   60732 system_pods.go:89] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.134719   60732 system_pods.go:89] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.134729   60732 system_pods.go:89] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.134738   60732 system_pods.go:89] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.134745   60732 system_pods.go:126] duration metric: took 7.037359ms to wait for k8s-apps to be running ...
	I0725 18:54:53.134756   60732 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:53.134804   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:53.152898   60732 system_svc.go:56] duration metric: took 18.132464ms WaitForService to wait for kubelet
	I0725 18:54:53.152939   60732 kubeadm.go:582] duration metric: took 4m23.26971097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:53.152966   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:53.155626   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:53.155645   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:53.155654   60732 node_conditions.go:105] duration metric: took 2.684085ms to run NodePressure ...
	I0725 18:54:53.155664   60732 start.go:241] waiting for startup goroutines ...
	I0725 18:54:53.155670   60732 start.go:246] waiting for cluster config update ...
	I0725 18:54:53.155680   60732 start.go:255] writing updated cluster config ...
	I0725 18:54:53.155922   60732 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:53.202323   60732 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:53.204492   60732 out.go:177] * Done! kubectl is now configured to use "embed-certs-646344" cluster and "default" namespace by default
	I0725 18:54:52.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.599046   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.702358   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:54:54.702929   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:54.703166   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:54:56.600641   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:58.600997   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:59.703734   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:59.704045   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:01.099681   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:03.099863   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:05.099936   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:07.600199   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:09.600587   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:10.600594   59378 pod_ready.go:81] duration metric: took 4m0.007321371s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:55:10.600617   59378 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:55:10.600625   59378 pod_ready.go:38] duration metric: took 4m5.545225617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:55:10.600637   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:55:10.600660   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:10.600701   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:10.652016   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:10.652040   59378 cri.go:89] found id: ""
	I0725 18:55:10.652047   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:10.652099   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.656405   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:10.656471   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:10.695672   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:10.695697   59378 cri.go:89] found id: ""
	I0725 18:55:10.695706   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:10.695768   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.700362   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:10.700424   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:10.736685   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.736702   59378 cri.go:89] found id: ""
	I0725 18:55:10.736709   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:10.736755   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.740626   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:10.740686   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:10.786452   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:10.786470   59378 cri.go:89] found id: ""
	I0725 18:55:10.786478   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:10.786533   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.790873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:10.790938   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:10.826203   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:10.826238   59378 cri.go:89] found id: ""
	I0725 18:55:10.826247   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:10.826311   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.830241   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:10.830418   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:10.865432   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:10.865460   59378 cri.go:89] found id: ""
	I0725 18:55:10.865470   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:10.865527   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.869415   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:10.869469   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:10.904230   59378 cri.go:89] found id: ""
	I0725 18:55:10.904254   59378 logs.go:276] 0 containers: []
	W0725 18:55:10.904262   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:10.904267   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:10.904339   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:10.938539   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:10.938558   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:10.938563   59378 cri.go:89] found id: ""
	I0725 18:55:10.938571   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:10.938623   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:09.704361   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:09.704593   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:10.942419   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.946266   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:10.946293   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.984335   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:10.984365   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:11.021733   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:11.021762   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:11.059218   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:11.059248   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:11.110886   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:11.110919   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:11.147381   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:11.147412   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:11.644012   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:11.644052   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:11.699290   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:11.699324   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:11.750317   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:11.750350   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:11.801340   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:11.801370   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:11.835746   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:11.835773   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:11.875309   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:11.875340   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:11.888262   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:11.888286   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:14.516169   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:55:14.533223   59378 api_server.go:72] duration metric: took 4m17.191676299s to wait for apiserver process to appear ...
	I0725 18:55:14.533248   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:55:14.533283   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:14.533328   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:14.568170   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:14.568188   59378 cri.go:89] found id: ""
	I0725 18:55:14.568195   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:14.568237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.572638   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:14.572704   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:14.605953   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:14.605976   59378 cri.go:89] found id: ""
	I0725 18:55:14.605983   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:14.606029   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.609849   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:14.609912   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:14.650049   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.650068   59378 cri.go:89] found id: ""
	I0725 18:55:14.650075   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:14.650117   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.653905   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:14.653966   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:14.697059   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:14.697078   59378 cri.go:89] found id: ""
	I0725 18:55:14.697086   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:14.697145   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.701179   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:14.701245   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:14.741482   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:14.741499   59378 cri.go:89] found id: ""
	I0725 18:55:14.741507   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:14.741554   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.745355   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:14.745410   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:14.784058   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.784077   59378 cri.go:89] found id: ""
	I0725 18:55:14.784086   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:14.784146   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.788254   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:14.788354   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:14.823286   59378 cri.go:89] found id: ""
	I0725 18:55:14.823309   59378 logs.go:276] 0 containers: []
	W0725 18:55:14.823317   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:14.823322   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:14.823369   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:14.860591   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.860625   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:14.860631   59378 cri.go:89] found id: ""
	I0725 18:55:14.860639   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:14.860693   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.864444   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.868015   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:14.868034   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.902336   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:14.902361   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.951281   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:14.951312   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.987810   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:14.987836   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:15.031264   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:15.031303   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:15.082950   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:15.082981   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:15.097240   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:15.097264   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:15.195392   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:15.195422   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:15.238978   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:15.239015   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:15.278551   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:15.278586   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:15.318486   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:15.318517   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:15.354217   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:15.354245   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:15.391511   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:15.391536   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:18.296420   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:55:18.301704   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:55:18.303040   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:55:18.303059   59378 api_server.go:131] duration metric: took 3.769804671s to wait for apiserver health ...
	I0725 18:55:18.303067   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:55:18.303097   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:18.303148   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:18.340192   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:18.340210   59378 cri.go:89] found id: ""
	I0725 18:55:18.340217   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:18.340262   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.343882   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:18.343936   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:18.381885   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:18.381912   59378 cri.go:89] found id: ""
	I0725 18:55:18.381922   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:18.381979   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.385682   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:18.385749   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:18.420162   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:18.420183   59378 cri.go:89] found id: ""
	I0725 18:55:18.420190   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:18.420237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.424103   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:18.424153   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:18.462946   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:18.462987   59378 cri.go:89] found id: ""
	I0725 18:55:18.462998   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:18.463055   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.467228   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:18.467278   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:18.510007   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:18.510036   59378 cri.go:89] found id: ""
	I0725 18:55:18.510046   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:18.510103   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.513873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:18.513937   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:18.551230   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:18.551255   59378 cri.go:89] found id: ""
	I0725 18:55:18.551264   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:18.551322   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.555764   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:18.555833   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:18.593584   59378 cri.go:89] found id: ""
	I0725 18:55:18.593615   59378 logs.go:276] 0 containers: []
	W0725 18:55:18.593626   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:18.593633   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:18.593690   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:18.631912   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.631938   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.631944   59378 cri.go:89] found id: ""
	I0725 18:55:18.631952   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:18.632036   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.635895   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.639457   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:18.639481   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.677563   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:18.677595   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.716298   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:18.716353   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:19.104236   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:19.104281   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:19.157931   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:19.157965   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:19.214479   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:19.214510   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:19.265860   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:19.265887   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:19.306476   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:19.306501   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:19.340758   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:19.340783   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:19.380798   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:19.380824   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:19.439585   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:19.439619   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:19.454117   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:19.454145   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:19.558944   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:19.558972   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:22.114733   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:55:22.114766   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.114773   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.114778   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.114783   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.114788   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.114792   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.114800   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.114806   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.114815   59378 system_pods.go:74] duration metric: took 3.811742621s to wait for pod list to return data ...
	I0725 18:55:22.114827   59378 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:55:22.118211   59378 default_sa.go:45] found service account: "default"
	I0725 18:55:22.118237   59378 default_sa.go:55] duration metric: took 3.400507ms for default service account to be created ...
	I0725 18:55:22.118245   59378 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:55:22.123350   59378 system_pods.go:86] 8 kube-system pods found
	I0725 18:55:22.123375   59378 system_pods.go:89] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.123380   59378 system_pods.go:89] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.123384   59378 system_pods.go:89] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.123390   59378 system_pods.go:89] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.123394   59378 system_pods.go:89] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.123398   59378 system_pods.go:89] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.123405   59378 system_pods.go:89] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.123410   59378 system_pods.go:89] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.123417   59378 system_pods.go:126] duration metric: took 5.166628ms to wait for k8s-apps to be running ...
	I0725 18:55:22.123424   59378 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:55:22.123467   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:55:22.139784   59378 system_svc.go:56] duration metric: took 16.349883ms WaitForService to wait for kubelet
	I0725 18:55:22.139808   59378 kubeadm.go:582] duration metric: took 4m24.798265923s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:55:22.139825   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:55:22.143958   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:55:22.143981   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:55:22.143992   59378 node_conditions.go:105] duration metric: took 4.161089ms to run NodePressure ...
	I0725 18:55:22.144006   59378 start.go:241] waiting for startup goroutines ...
	I0725 18:55:22.144015   59378 start.go:246] waiting for cluster config update ...
	I0725 18:55:22.144026   59378 start.go:255] writing updated cluster config ...
	I0725 18:55:22.144382   59378 ssh_runner.go:195] Run: rm -f paused
	I0725 18:55:22.192893   59378 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0725 18:55:22.195796   59378 out.go:177] * Done! kubectl is now configured to use "no-preload-371663" cluster and "default" namespace by default
	I0725 18:55:29.705545   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:29.705871   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.707936   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:56:09.708279   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.708303   60176 kubeadm.go:310] 
	I0725 18:56:09.708361   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:56:09.708425   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:56:09.708434   60176 kubeadm.go:310] 
	I0725 18:56:09.708495   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:56:09.708548   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:56:09.708721   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:56:09.708755   60176 kubeadm.go:310] 
	I0725 18:56:09.708910   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:56:09.708960   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:56:09.708997   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:56:09.709006   60176 kubeadm.go:310] 
	I0725 18:56:09.709130   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:56:09.709230   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:56:09.709239   60176 kubeadm.go:310] 
	I0725 18:56:09.709366   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:56:09.709499   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:56:09.709608   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:56:09.709715   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:56:09.709730   60176 kubeadm.go:310] 
	I0725 18:56:09.710446   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:56:09.710594   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:56:09.710699   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0725 18:56:09.710838   60176 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 18:56:09.710897   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:56:15.078699   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.367772874s)
	I0725 18:56:15.078772   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:56:15.093265   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:56:15.102513   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:56:15.102529   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:56:15.102570   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:56:15.111001   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:56:15.111059   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:56:15.119773   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:56:15.128109   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:56:15.128166   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:56:15.136753   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.145122   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:56:15.145179   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.153952   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:56:15.162067   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:56:15.162109   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:56:15.170779   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:56:15.382925   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:58:11.387751   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:58:11.387868   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0725 18:58:11.389848   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:58:11.389935   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:58:11.390076   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:58:11.390177   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:58:11.390289   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:58:11.390389   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:58:11.392281   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:58:11.392400   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:58:11.392487   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:58:11.392609   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:58:11.392698   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:58:11.392808   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:58:11.392893   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:58:11.392960   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:58:11.393054   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:58:11.393160   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:58:11.393260   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:58:11.393311   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:58:11.393362   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:58:11.393415   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:58:11.393470   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:58:11.393522   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:58:11.393573   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:58:11.393665   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:58:11.393760   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:58:11.393815   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:58:11.393888   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:58:11.395197   60176 out.go:204]   - Booting up control plane ...
	I0725 18:58:11.395292   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:58:11.395385   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:58:11.395454   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:58:11.395528   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:58:11.395674   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:58:11.395717   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:58:11.395793   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396019   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396116   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396334   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396408   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396572   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396638   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396799   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396865   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.397061   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.397069   60176 kubeadm.go:310] 
	I0725 18:58:11.397102   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:58:11.397136   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:58:11.397141   60176 kubeadm.go:310] 
	I0725 18:58:11.397169   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:58:11.397212   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:58:11.397314   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:58:11.397338   60176 kubeadm.go:310] 
	I0725 18:58:11.397462   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:58:11.397504   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:58:11.397554   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:58:11.397566   60176 kubeadm.go:310] 
	I0725 18:58:11.397657   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:58:11.397730   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:58:11.397737   60176 kubeadm.go:310] 
	I0725 18:58:11.397832   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:58:11.397928   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:58:11.398009   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:58:11.398088   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:58:11.398144   60176 kubeadm.go:310] 
	I0725 18:58:11.398184   60176 kubeadm.go:394] duration metric: took 8m7.195831536s to StartCluster
	I0725 18:58:11.398237   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:58:11.398431   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:58:11.438474   60176 cri.go:89] found id: ""
	I0725 18:58:11.438497   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.438504   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:58:11.438509   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:58:11.438560   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:58:11.470965   60176 cri.go:89] found id: ""
	I0725 18:58:11.471000   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.471013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:58:11.471021   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:58:11.471086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:58:11.503353   60176 cri.go:89] found id: ""
	I0725 18:58:11.503387   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.503402   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:58:11.503409   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:58:11.503468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:58:11.535307   60176 cri.go:89] found id: ""
	I0725 18:58:11.535340   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.535350   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:58:11.535359   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:58:11.535425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:58:11.568071   60176 cri.go:89] found id: ""
	I0725 18:58:11.568094   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.568104   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:58:11.568118   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:58:11.568183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:58:11.600126   60176 cri.go:89] found id: ""
	I0725 18:58:11.600154   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.600165   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:58:11.600172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:58:11.600234   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:58:11.632609   60176 cri.go:89] found id: ""
	I0725 18:58:11.632635   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.632642   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:58:11.632648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:58:11.632706   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:58:11.666352   60176 cri.go:89] found id: ""
	I0725 18:58:11.666376   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.666384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:58:11.666392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:58:11.666409   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:58:11.766887   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:58:11.766912   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:58:11.766930   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:58:11.885565   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:58:11.885601   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:58:11.927611   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:58:11.927637   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:58:11.978011   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:58:11.978046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0725 18:58:11.991296   60176 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 18:58:11.991350   60176 out.go:239] * 
	W0725 18:58:11.991412   60176 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.991433   60176 out.go:239] * 
	W0725 18:58:11.992535   60176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:58:11.996223   60176 out.go:177] 
	W0725 18:58:11.997418   60176 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.997464   60176 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 18:58:11.997495   60176 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 18:58:11.998869   60176 out.go:177] 
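	
	For reference, a minimal shell sketch of the troubleshooting sequence that the kubeadm output and minikube suggestion above describe. The commands are quoted from the log itself; the profile name passed to minikube start (embed-certs-646344) is an assumption based on the node name in this report, and CONTAINERID is a placeholder for the failing container's ID.
	
	  # Check whether the kubelet is running and why it may be failing its health check
	  systemctl status kubelet
	  journalctl -xeu kubelet
	  curl -sSL http://localhost:10248/healthz
	
	  # List control-plane containers via CRI-O and inspect a failing one
	  crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
	  # Retry the start with the cgroup-driver hint from the minikube suggestion (profile name assumed)
	  minikube start -p embed-certs-646344 --extra-config=kubelet.cgroup-driver=systemd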
	
	
	==> CRI-O <==
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.244734571Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934235244710437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33fadf66-31ef-4662-8d08-dc467c250ca1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.245391506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7748ea8-2738-4ee2-ab11-79154edc5f4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.245519781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7748ea8-2738-4ee2-ab11-79154edc5f4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.245710522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933458358636625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13736cd3f5220b619b6593285830274856c0003f93f2e807567c0b5767c36c4,PodSandboxId:7e0fd69172ec750561f59555c7adf57f757763c8e417b62e43f5d6e5af792ddf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933438518877123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a7da430-e23b-4464-81b8-46671459aca5,},Annotations:map[string]string{io.kubernetes.container.hash: 830533e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80,PodSandboxId:4e428ebdbbe18502cedb32c6256d2e59ea7163947d4dfacd5df792f2a6c9b148,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933435258842719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89vvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4ee327-2b83-4102-aeac-9f2285355345,},Annotations:map[string]string{io.kubernetes.container.hash: 5a287f46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933427524785441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb,PodSandboxId:015513a6da5f81cc996a4690257f67c1dcef91f659a930a1506b7451e1202c36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933427583287463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xk2lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d74b42c-16cd-4714-803b-129e1d2ec
722,},Annotations:map[string]string{io.kubernetes.container.hash: 611116f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3,PodSandboxId:bcbae76557f034a99104ad30d29dd4f98df35efcf8e16486d2b48051dff70808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933423327806526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac13445217caad08c5c6918f3197267,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4,PodSandboxId:252d2fc7d92b9292e3d60c62c3665fc8495427deee52a4a7c16fa22fbe2e0028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933423320089038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cb3377fd6b1c6fe8ec6cfba0898fdf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 41461ede,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef,PodSandboxId:ebb38f4fbb2b0f57f29ddfdab7b93face599ee1a87942eced694c51d04812a0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933423323005132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b972c23d655d4ba9bf529d7046ab3fa,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c,PodSandboxId:38d2459c446799447c1324956aef16ee54786ec5ac27eb96f600e1b2b6b7ecac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933423318832290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac6d37f2b02da78cf40957bea7f3d5f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: f51f1457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7748ea8-2738-4ee2-ab11-79154edc5f4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.282081935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44808e75-57c7-4269-a052-20165eb3983f name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.282151834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44808e75-57c7-4269-a052-20165eb3983f name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.283491056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd37538b-1b09-4c1e-aa77-b3e968b645c0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.287313121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934235287282943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd37538b-1b09-4c1e-aa77-b3e968b645c0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.288511170Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24efda00-b7eb-470f-a68f-1b2c5e5b5d88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.288560190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24efda00-b7eb-470f-a68f-1b2c5e5b5d88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.288832646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933458358636625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13736cd3f5220b619b6593285830274856c0003f93f2e807567c0b5767c36c4,PodSandboxId:7e0fd69172ec750561f59555c7adf57f757763c8e417b62e43f5d6e5af792ddf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933438518877123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a7da430-e23b-4464-81b8-46671459aca5,},Annotations:map[string]string{io.kubernetes.container.hash: 830533e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80,PodSandboxId:4e428ebdbbe18502cedb32c6256d2e59ea7163947d4dfacd5df792f2a6c9b148,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933435258842719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89vvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4ee327-2b83-4102-aeac-9f2285355345,},Annotations:map[string]string{io.kubernetes.container.hash: 5a287f46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933427524785441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb,PodSandboxId:015513a6da5f81cc996a4690257f67c1dcef91f659a930a1506b7451e1202c36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933427583287463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xk2lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d74b42c-16cd-4714-803b-129e1d2ec
722,},Annotations:map[string]string{io.kubernetes.container.hash: 611116f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3,PodSandboxId:bcbae76557f034a99104ad30d29dd4f98df35efcf8e16486d2b48051dff70808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933423327806526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac13445217caad08c5c6918f3197267,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4,PodSandboxId:252d2fc7d92b9292e3d60c62c3665fc8495427deee52a4a7c16fa22fbe2e0028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933423320089038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cb3377fd6b1c6fe8ec6cfba0898fdf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 41461ede,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef,PodSandboxId:ebb38f4fbb2b0f57f29ddfdab7b93face599ee1a87942eced694c51d04812a0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933423323005132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b972c23d655d4ba9bf529d7046ab3fa,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c,PodSandboxId:38d2459c446799447c1324956aef16ee54786ec5ac27eb96f600e1b2b6b7ecac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933423318832290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac6d37f2b02da78cf40957bea7f3d5f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: f51f1457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24efda00-b7eb-470f-a68f-1b2c5e5b5d88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.324401957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe021fea-3f16-4379-b847-88fee46b3ecd name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.324548237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe021fea-3f16-4379-b847-88fee46b3ecd name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.325618426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65e1ca28-2bcc-49c3-b53c-f88f29cf2546 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.326028302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934235326001460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65e1ca28-2bcc-49c3-b53c-f88f29cf2546 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.326620751Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65920029-129f-4486-a90b-3dfe9b6db39a name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.326676711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65920029-129f-4486-a90b-3dfe9b6db39a name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.326877497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933458358636625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13736cd3f5220b619b6593285830274856c0003f93f2e807567c0b5767c36c4,PodSandboxId:7e0fd69172ec750561f59555c7adf57f757763c8e417b62e43f5d6e5af792ddf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933438518877123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a7da430-e23b-4464-81b8-46671459aca5,},Annotations:map[string]string{io.kubernetes.container.hash: 830533e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80,PodSandboxId:4e428ebdbbe18502cedb32c6256d2e59ea7163947d4dfacd5df792f2a6c9b148,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933435258842719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89vvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4ee327-2b83-4102-aeac-9f2285355345,},Annotations:map[string]string{io.kubernetes.container.hash: 5a287f46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933427524785441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb,PodSandboxId:015513a6da5f81cc996a4690257f67c1dcef91f659a930a1506b7451e1202c36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933427583287463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xk2lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d74b42c-16cd-4714-803b-129e1d2ec
722,},Annotations:map[string]string{io.kubernetes.container.hash: 611116f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3,PodSandboxId:bcbae76557f034a99104ad30d29dd4f98df35efcf8e16486d2b48051dff70808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933423327806526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac13445217caad08c5c6918f3197267,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4,PodSandboxId:252d2fc7d92b9292e3d60c62c3665fc8495427deee52a4a7c16fa22fbe2e0028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933423320089038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cb3377fd6b1c6fe8ec6cfba0898fdf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 41461ede,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef,PodSandboxId:ebb38f4fbb2b0f57f29ddfdab7b93face599ee1a87942eced694c51d04812a0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933423323005132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b972c23d655d4ba9bf529d7046ab3fa,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c,PodSandboxId:38d2459c446799447c1324956aef16ee54786ec5ac27eb96f600e1b2b6b7ecac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933423318832290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac6d37f2b02da78cf40957bea7f3d5f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: f51f1457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65920029-129f-4486-a90b-3dfe9b6db39a name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.359818911Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b43c1d7-847b-41a3-b7a4-346a1c214ddb name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.359889077Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b43c1d7-847b-41a3-b7a4-346a1c214ddb name=/runtime.v1.RuntimeService/Version
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.361127970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=510d9c76-6937-4fb8-b3f2-7910120fa990 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.361688403Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934235361661087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=510d9c76-6937-4fb8-b3f2-7910120fa990 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.362261883Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebc1db43-d9bf-4088-8a65-0295d6c79174 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.362309130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebc1db43-d9bf-4088-8a65-0295d6c79174 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:03:55 embed-certs-646344 crio[719]: time="2024-07-25 19:03:55.362562978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933458358636625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13736cd3f5220b619b6593285830274856c0003f93f2e807567c0b5767c36c4,PodSandboxId:7e0fd69172ec750561f59555c7adf57f757763c8e417b62e43f5d6e5af792ddf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933438518877123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a7da430-e23b-4464-81b8-46671459aca5,},Annotations:map[string]string{io.kubernetes.container.hash: 830533e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80,PodSandboxId:4e428ebdbbe18502cedb32c6256d2e59ea7163947d4dfacd5df792f2a6c9b148,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933435258842719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89vvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4ee327-2b83-4102-aeac-9f2285355345,},Annotations:map[string]string{io.kubernetes.container.hash: 5a287f46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933427524785441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb,PodSandboxId:015513a6da5f81cc996a4690257f67c1dcef91f659a930a1506b7451e1202c36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933427583287463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xk2lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d74b42c-16cd-4714-803b-129e1d2ec
722,},Annotations:map[string]string{io.kubernetes.container.hash: 611116f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3,PodSandboxId:bcbae76557f034a99104ad30d29dd4f98df35efcf8e16486d2b48051dff70808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933423327806526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac13445217caad08c5c6918f3197267,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4,PodSandboxId:252d2fc7d92b9292e3d60c62c3665fc8495427deee52a4a7c16fa22fbe2e0028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933423320089038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cb3377fd6b1c6fe8ec6cfba0898fdf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 41461ede,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef,PodSandboxId:ebb38f4fbb2b0f57f29ddfdab7b93face599ee1a87942eced694c51d04812a0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933423323005132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b972c23d655d4ba9bf529d7046ab3fa,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c,PodSandboxId:38d2459c446799447c1324956aef16ee54786ec5ac27eb96f600e1b2b6b7ecac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933423318832290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac6d37f2b02da78cf40957bea7f3d5f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: f51f1457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebc1db43-d9bf-4088-8a65-0295d6c79174 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd45387197a71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   faf79030adb2e       storage-provisioner
	f13736cd3f522       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   7e0fd69172ec7       busybox
	e265ce86dc50d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   4e428ebdbbe18       coredns-7db6d8ff4d-89vvx
	3396bd8e6a955       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   015513a6da5f8       kube-proxy-xk2lq
	e75aba803f380       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   faf79030adb2e       storage-provisioner
	980f1cafbf9df       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   bcbae76557f03       kube-scheduler-embed-certs-646344
	a057db9df5d79       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   ebb38f4fbb2b0       kube-controller-manager-embed-certs-646344
	c4e8d2e70adcf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   252d2fc7d92b9       etcd-embed-certs-646344
	e29758ae5e857       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   38d2459c44679       kube-apiserver-embed-certs-646344
	
	
	==> coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35265 - 23403 "HINFO IN 8076321064129470149.2907509352587689521. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010791408s
	
	
	==> describe nodes <==
	Name:               embed-certs-646344
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-646344
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=embed-certs-646344
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_44_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:44:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-646344
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 19:03:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 19:01:09 +0000   Thu, 25 Jul 2024 18:44:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 19:01:09 +0000   Thu, 25 Jul 2024 18:44:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 19:01:09 +0000   Thu, 25 Jul 2024 18:44:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 19:01:09 +0000   Thu, 25 Jul 2024 18:50:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.133
	  Hostname:    embed-certs-646344
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e5df9354e56484d8dfebe496d944239
	  System UUID:                8e5df935-4e56-484d-8dfe-be496d944239
	  Boot ID:                    f262b540-66c0-40e8-9836-cc83f8c1974f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-89vvx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-embed-certs-646344                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kube-apiserver-embed-certs-646344             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-embed-certs-646344    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-xk2lq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-embed-certs-646344             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-569cc877fc-4gcts               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-646344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-646344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-646344 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node embed-certs-646344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node embed-certs-646344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m                kubelet          Node embed-certs-646344 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                kubelet          Node embed-certs-646344 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-646344 event: Registered Node embed-certs-646344 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-646344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-646344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-646344 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-646344 event: Registered Node embed-certs-646344 in Controller
	
	
	==> dmesg <==
	[Jul25 18:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056648] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038514] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.944685] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.891438] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.449935] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.051698] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.063508] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058787] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.201858] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.110571] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.269119] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.212361] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +2.409705] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +0.063408] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.521268] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.445639] systemd-fstab-generator[1538]: Ignoring "noauto" option for root device
	[  +3.280783] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.342513] kauditd_printk_skb: 35 callbacks suppressed
	[ +19.825225] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] <==
	{"level":"info","ts":"2024-07-25T18:50:23.771404Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c068ec6aed99fd16","local-member-id":"3a498a03d9c7f67","added-peer-id":"3a498a03d9c7f67","added-peer-peer-urls":["https://192.168.61.133:2380"]}
	{"level":"info","ts":"2024-07-25T18:50:23.77265Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c068ec6aed99fd16","local-member-id":"3a498a03d9c7f67","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:50:23.772694Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:50:23.772383Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-25T18:50:23.773697Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3a498a03d9c7f67","initial-advertise-peer-urls":["https://192.168.61.133:2380"],"listen-peer-urls":["https://192.168.61.133:2380"],"advertise-client-urls":["https://192.168.61.133:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.133:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:50:23.77374Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:50:23.772402Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.133:2380"}
	{"level":"info","ts":"2024-07-25T18:50:23.773788Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.133:2380"}
	{"level":"info","ts":"2024-07-25T18:50:25.139627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a498a03d9c7f67 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-25T18:50:25.139735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a498a03d9c7f67 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-25T18:50:25.139793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a498a03d9c7f67 received MsgPreVoteResp from 3a498a03d9c7f67 at term 2"}
	{"level":"info","ts":"2024-07-25T18:50:25.139832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a498a03d9c7f67 became candidate at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:25.139866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a498a03d9c7f67 received MsgVoteResp from 3a498a03d9c7f67 at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:25.139893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3a498a03d9c7f67 became leader at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:25.139924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3a498a03d9c7f67 elected leader 3a498a03d9c7f67 at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:25.152892Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3a498a03d9c7f67","local-member-attributes":"{Name:embed-certs-646344 ClientURLs:[https://192.168.61.133:2379]}","request-path":"/0/members/3a498a03d9c7f67/attributes","cluster-id":"c068ec6aed99fd16","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:50:25.152924Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:50:25.153269Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:50:25.153306Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:50:25.152951Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:50:25.155823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:50:25.15642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.133:2379"}
	{"level":"info","ts":"2024-07-25T19:00:25.179801Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":829}
	{"level":"info","ts":"2024-07-25T19:00:25.189173Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":829,"took":"8.824944ms","hash":1388300861,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2121728,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-25T19:00:25.189259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1388300861,"revision":829,"compact-revision":-1}
	
	
	==> kernel <==
	 19:03:55 up 13 min,  0 users,  load average: 0.20, 0.09, 0.08
	Linux embed-certs-646344 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] <==
	I0725 18:58:27.599845       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:00:26.600140       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:00:26.600495       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0725 19:00:27.601524       1 handler_proxy.go:93] no RequestInfo found in the context
	W0725 19:00:27.601524       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:00:27.601727       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 19:00:27.601751       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0725 19:00:27.601691       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:00:27.602933       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:01:27.602681       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:01:27.602911       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 19:01:27.602941       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:01:27.603066       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:01:27.603155       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:01:27.604873       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:03:27.603873       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:03:27.603969       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 19:03:27.603979       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:03:27.605180       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:03:27.605256       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:03:27.605275       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] <==
	I0725 18:58:10.602347       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 18:58:40.150528       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 18:58:40.611990       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 18:59:10.155754       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 18:59:10.619233       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 18:59:40.161620       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 18:59:40.627208       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:00:10.166991       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:00:10.635974       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:00:40.172798       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:00:40.644002       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:01:10.180316       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:01:10.651423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:01:40.185852       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:01:40.659205       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0725 19:01:48.191279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="308.494µs"
	I0725 19:02:00.194341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="145.705µs"
	E0725 19:02:10.190082       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:02:10.666849       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:02:40.194263       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:02:40.674763       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:03:10.200379       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:03:10.683191       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:03:40.204997       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:03:40.691990       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] <==
	I0725 18:50:27.803400       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:50:27.811844       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.133"]
	I0725 18:50:27.843093       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:50:27.843136       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:50:27.843151       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:50:27.845240       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:50:27.845573       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:50:27.845597       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:50:27.846781       1 config.go:192] "Starting service config controller"
	I0725 18:50:27.846809       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:50:27.846834       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:50:27.846837       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:50:27.847286       1 config.go:319] "Starting node config controller"
	I0725 18:50:27.847312       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:50:27.947499       1 shared_informer.go:320] Caches are synced for node config
	I0725 18:50:27.947539       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:50:27.947598       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] <==
	I0725 18:50:24.243355       1 serving.go:380] Generated self-signed cert in-memory
	W0725 18:50:26.500954       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 18:50:26.501111       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:50:26.501145       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 18:50:26.501215       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 18:50:26.619761       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 18:50:26.619790       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:50:26.621785       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:50:26.621908       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:50:26.625823       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 18:50:26.625893       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 18:50:26.722885       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 19:01:23 embed-certs-646344 kubelet[932]: E0725 19:01:23.175133     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:01:34 embed-certs-646344 kubelet[932]: E0725 19:01:34.191002     932 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 25 19:01:34 embed-certs-646344 kubelet[932]: E0725 19:01:34.191488     932 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 25 19:01:34 embed-certs-646344 kubelet[932]: E0725 19:01:34.192523     932 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmsv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-4gcts_kube-system(688239e2-95b8-4344-b3e5-5199f9504a19): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 25 19:01:34 embed-certs-646344 kubelet[932]: E0725 19:01:34.192806     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:01:48 embed-certs-646344 kubelet[932]: E0725 19:01:48.175696     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:02:00 embed-certs-646344 kubelet[932]: E0725 19:02:00.178853     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:02:12 embed-certs-646344 kubelet[932]: E0725 19:02:12.175923     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:02:22 embed-certs-646344 kubelet[932]: E0725 19:02:22.191907     932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:02:22 embed-certs-646344 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:02:22 embed-certs-646344 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:02:22 embed-certs-646344 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:02:22 embed-certs-646344 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:02:25 embed-certs-646344 kubelet[932]: E0725 19:02:25.175187     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:02:40 embed-certs-646344 kubelet[932]: E0725 19:02:40.176113     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:02:51 embed-certs-646344 kubelet[932]: E0725 19:02:51.176707     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:03:05 embed-certs-646344 kubelet[932]: E0725 19:03:05.175568     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:03:18 embed-certs-646344 kubelet[932]: E0725 19:03:18.175461     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:03:22 embed-certs-646344 kubelet[932]: E0725 19:03:22.191655     932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:03:22 embed-certs-646344 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:03:22 embed-certs-646344 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:03:22 embed-certs-646344 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:03:22 embed-certs-646344 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:03:33 embed-certs-646344 kubelet[932]: E0725 19:03:33.175705     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:03:47 embed-certs-646344 kubelet[932]: E0725 19:03:47.175685     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	
	
	==> storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] <==
	I0725 18:50:27.766950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0725 18:50:57.769816       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] <==
	I0725 18:50:58.453399       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 18:50:58.464107       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 18:50:58.464278       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 18:50:58.483189       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 18:50:58.483413       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-646344_202205bb-0139-4955-969b-81fbf5fd7ef5!
	I0725 18:50:58.485801       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3fd3587c-afb1-4221-a023-d925e899bfae", APIVersion:"v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-646344_202205bb-0139-4955-969b-81fbf5fd7ef5 became leader
	I0725 18:50:58.584350       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-646344_202205bb-0139-4955-969b-81fbf5fd7ef5!
	

                                                
                                                
-- /stdout --
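The metrics-server ImagePullBackOff entries in the kubelet log above are expected for this test: the addon was enabled with --registries=MetricsServer=fake.domain (visible in the Audit log further below), so the pod's image fake.domain/registry.k8s.io/echoserver:1.4 is unresolvable by design. A quick manual check of the override, assuming kubectl access to the embed-certs-646344 context (an illustrative command, not part of the harness output):

    kubectl --context embed-certs-646344 -n kube-system get deployment metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'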
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-646344 -n embed-certs-646344
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-646344 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-4gcts
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-646344 describe pod metrics-server-569cc877fc-4gcts
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-646344 describe pod metrics-server-569cc877fc-4gcts: exit status 1 (61.348561ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-4gcts" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-646344 describe pod metrics-server-569cc877fc-4gcts: exit status 1
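Note that the describe command above passes no namespace, so kubectl searches the context's default namespace while the pod runs in kube-system (as the kubelet log shows), which is the likely reason for the NotFound error. An illustrative namespaced lookup, not part of the test run, would be:

    kubectl --context embed-certs-646344 -n kube-system describe pod metrics-server-569cc877fc-4gcts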
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0725 18:55:35.103205   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 18:56:58.589893   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-371663 -n no-preload-371663
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-25 19:04:22.713481455 +0000 UTC m=+5736.794213299
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
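The failing wait can be reproduced by hand with the same label selector and namespace reported above; the commands below are an illustrative sketch (the 90s timeout is arbitrary) and not part of the harness:

    kubectl --context no-preload-371663 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context no-preload-371663 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=90s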
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-371663 -n no-preload-371663
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-371663 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-371663 logs -n 25: (2.06328109s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC |                     |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979261                              | cert-expiration-979261       | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:42 UTC |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-819413             | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-819413                  | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-108542        | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| image   | newest-cni-819413 image list                           | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-371663                  | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-371663 --memory=2200                     | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-600433       | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:54 UTC |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-646344            | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-108542             | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-646344                 | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC | 25 Jul 24 18:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:47:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:47:51.335413   60732 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:47:51.335822   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.335880   60732 out.go:304] Setting ErrFile to fd 2...
	I0725 18:47:51.335900   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.336419   60732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:47:51.337339   60732 out.go:298] Setting JSON to false
	I0725 18:47:51.338209   60732 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5415,"bootTime":1721927856,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:47:51.338264   60732 start.go:139] virtualization: kvm guest
	I0725 18:47:51.340134   60732 out.go:177] * [embed-certs-646344] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:47:51.341750   60732 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:47:51.341752   60732 notify.go:220] Checking for updates...
	I0725 18:47:51.344351   60732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:47:51.345770   60732 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:47:51.346912   60732 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:47:51.348038   60732 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:47:51.349161   60732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:47:51.350578   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:47:51.350953   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.350991   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.365561   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0725 18:47:51.365978   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.366490   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.366509   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.366823   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.366999   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.367234   60732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:47:51.367497   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.367527   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.381639   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0725 18:47:51.381960   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.382381   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.382402   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.382685   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.382870   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.415199   60732 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:47:51.416470   60732 start.go:297] selected driver: kvm2
	I0725 18:47:51.416488   60732 start.go:901] validating driver "kvm2" against &{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.416607   60732 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:47:51.417317   60732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.417405   60732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:47:51.431942   60732 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:47:51.432284   60732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:47:51.432371   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:47:51.432386   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:47:51.432434   60732 start.go:340] cluster config:
	{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.432535   60732 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.435012   60732 out.go:177] * Starting "embed-certs-646344" primary control-plane node in "embed-certs-646344" cluster
	I0725 18:47:53.472602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:47:51.436106   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:47:51.436136   60732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 18:47:51.436143   60732 cache.go:56] Caching tarball of preloaded images
	I0725 18:47:51.436215   60732 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:47:51.436238   60732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 18:47:51.436365   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:47:51.436560   60732 start.go:360] acquireMachinesLock for embed-certs-646344: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:47:59.552616   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:02.624594   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:08.704607   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:11.776581   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:17.856602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:20.928547   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:27.008590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:30.084604   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:36.160617   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:39.232633   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:45.312630   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:48.384662   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:54.464559   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:57.536621   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:03.616552   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:06.688590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.773620   59645 start.go:364] duration metric: took 4m26.592394108s to acquireMachinesLock for "default-k8s-diff-port-600433"
	I0725 18:49:15.773683   59645 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:15.773694   59645 fix.go:54] fixHost starting: 
	I0725 18:49:15.774019   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:15.774051   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:15.789240   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0725 18:49:15.789740   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:15.790212   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:15.790233   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:15.790591   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:15.790845   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:15.791014   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:15.793113   59645 fix.go:112] recreateIfNeeded on default-k8s-diff-port-600433: state=Stopped err=<nil>
	I0725 18:49:15.793149   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	W0725 18:49:15.793313   59645 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:15.795191   59645 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-600433" ...
	I0725 18:49:12.768538   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.771150   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:15.771186   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771533   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:49:15.771558   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771774   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:49:15.773458   59378 machine.go:97] duration metric: took 4m37.565633658s to provisionDockerMachine
	I0725 18:49:15.773505   59378 fix.go:56] duration metric: took 4m37.588536865s for fixHost
	I0725 18:49:15.773515   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 4m37.588577134s
	W0725 18:49:15.773539   59378 start.go:714] error starting host: provision: host is not running
	W0725 18:49:15.773622   59378 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0725 18:49:15.773634   59378 start.go:729] Will try again in 5 seconds ...
	I0725 18:49:15.796482   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Start
	I0725 18:49:15.796686   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring networks are active...
	I0725 18:49:15.797399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network default is active
	I0725 18:49:15.797752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network mk-default-k8s-diff-port-600433 is active
	I0725 18:49:15.798080   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Getting domain xml...
	I0725 18:49:15.798673   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Creating domain...
	I0725 18:49:17.018432   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting to get IP...
	I0725 18:49:17.019400   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.019970   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.020072   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.019959   61066 retry.go:31] will retry after 308.610139ms: waiting for machine to come up
	I0725 18:49:17.330698   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331224   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.331162   61066 retry.go:31] will retry after 334.762083ms: waiting for machine to come up
	I0725 18:49:17.667824   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668211   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668241   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.668158   61066 retry.go:31] will retry after 474.612313ms: waiting for machine to come up
	I0725 18:49:18.145035   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145575   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.145498   61066 retry.go:31] will retry after 493.878098ms: waiting for machine to come up
	I0725 18:49:18.641257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641839   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.641705   61066 retry.go:31] will retry after 747.653142ms: waiting for machine to come up
	I0725 18:49:20.776022   59378 start.go:360] acquireMachinesLock for no-preload-371663: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:49:19.390788   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391296   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:19.391237   61066 retry.go:31] will retry after 790.014184ms: waiting for machine to come up
	I0725 18:49:20.183244   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183733   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183756   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:20.183676   61066 retry.go:31] will retry after 932.227483ms: waiting for machine to come up
	I0725 18:49:21.117548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.117989   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.118019   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:21.117947   61066 retry.go:31] will retry after 1.421954156s: waiting for machine to come up
	I0725 18:49:22.541650   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542032   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542059   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:22.541972   61066 retry.go:31] will retry after 1.281624824s: waiting for machine to come up
	I0725 18:49:23.825380   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825721   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825738   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:23.825700   61066 retry.go:31] will retry after 1.470467032s: waiting for machine to come up
	I0725 18:49:25.298488   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.298993   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.299016   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:25.298958   61066 retry.go:31] will retry after 2.857621922s: waiting for machine to come up
	I0725 18:49:28.157929   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158361   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158387   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:28.158322   61066 retry.go:31] will retry after 2.354044303s: waiting for machine to come up
	I0725 18:49:30.514911   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515408   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515440   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:30.515361   61066 retry.go:31] will retry after 4.26590841s: waiting for machine to come up
	I0725 18:49:36.036943   60176 start.go:364] duration metric: took 3m49.551567331s to acquireMachinesLock for "old-k8s-version-108542"
	I0725 18:49:36.037007   60176 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:36.037018   60176 fix.go:54] fixHost starting: 
	I0725 18:49:36.037477   60176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:36.037517   60176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:36.055190   60176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0725 18:49:36.055631   60176 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:36.056086   60176 main.go:141] libmachine: Using API Version  1
	I0725 18:49:36.056105   60176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:36.056466   60176 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:36.056685   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:36.056862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetState
	I0725 18:49:36.058311   60176 fix.go:112] recreateIfNeeded on old-k8s-version-108542: state=Stopped err=<nil>
	I0725 18:49:36.058348   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	W0725 18:49:36.058530   60176 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:36.060822   60176 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-108542" ...
	I0725 18:49:36.062077   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .Start
	I0725 18:49:36.062241   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring networks are active...
	I0725 18:49:36.062926   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network default is active
	I0725 18:49:36.063329   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network mk-old-k8s-version-108542 is active
	I0725 18:49:36.063698   60176 main.go:141] libmachine: (old-k8s-version-108542) Getting domain xml...
	I0725 18:49:36.064367   60176 main.go:141] libmachine: (old-k8s-version-108542) Creating domain...
	I0725 18:49:34.786308   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786801   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Found IP for machine: 192.168.50.221
	I0725 18:49:34.786836   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has current primary IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserving static IP address...
	I0725 18:49:34.787187   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.787223   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | skip adding static IP to network mk-default-k8s-diff-port-600433 - found existing host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"}
	I0725 18:49:34.787237   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserved static IP address: 192.168.50.221
	I0725 18:49:34.787251   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Getting to WaitForSSH function...
	I0725 18:49:34.787261   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for SSH to be available...
	I0725 18:49:34.789202   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789467   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.789494   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789582   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH client type: external
	I0725 18:49:34.789608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa (-rw-------)
	I0725 18:49:34.789642   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:34.789656   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | About to run SSH command:
	I0725 18:49:34.789672   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | exit 0
	I0725 18:49:34.916303   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | SSH cmd err, output: <nil>: 
	I0725 18:49:34.916741   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetConfigRaw
	I0725 18:49:34.917309   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:34.919931   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920356   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.920388   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920711   59645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/config.json ...
	I0725 18:49:34.920952   59645 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:34.920973   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:34.921158   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:34.923280   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923663   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.923699   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923782   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:34.923953   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924116   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924367   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:34.924559   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:34.924778   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:34.924789   59645 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:35.036568   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:35.036605   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.036862   59645 buildroot.go:166] provisioning hostname "default-k8s-diff-port-600433"
	I0725 18:49:35.036890   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.037089   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.039523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.039891   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.039928   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.040048   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.040240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040409   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040540   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.040696   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.040855   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.040867   59645 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-600433 && echo "default-k8s-diff-port-600433" | sudo tee /etc/hostname
	I0725 18:49:35.170553   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-600433
	
	I0725 18:49:35.170606   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.173260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173590   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.173615   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173811   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.174057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.174606   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.174762   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.174798   59645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-600433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-600433/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-600433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:35.292349   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:35.292387   59645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:35.292425   59645 buildroot.go:174] setting up certificates
	I0725 18:49:35.292443   59645 provision.go:84] configureAuth start
	I0725 18:49:35.292456   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.292749   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:35.295317   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295628   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.295657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295817   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.297815   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298114   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.298146   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298330   59645 provision.go:143] copyHostCerts
	I0725 18:49:35.298373   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:35.298384   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:35.298461   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:35.298578   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:35.298590   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:35.298631   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:35.298725   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:35.298735   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:35.298767   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:35.298846   59645 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-600433 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-600433 localhost minikube]
	I0725 18:49:35.385077   59645 provision.go:177] copyRemoteCerts
	I0725 18:49:35.385142   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:35.385168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.387858   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388165   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.388195   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.388604   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.388760   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.388903   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.473920   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:35.496193   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0725 18:49:35.517673   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:35.538593   59645 provision.go:87] duration metric: took 246.139455ms to configureAuth
	I0725 18:49:35.538617   59645 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:35.538796   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:35.538860   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.541598   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542144   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.542168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542369   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.542548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542664   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542812   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.542937   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.543138   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.543167   59645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:35.799471   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:35.799495   59645 machine.go:97] duration metric: took 878.530074ms to provisionDockerMachine
	I0725 18:49:35.799509   59645 start.go:293] postStartSetup for "default-k8s-diff-port-600433" (driver="kvm2")
	I0725 18:49:35.799526   59645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:35.799569   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:35.799861   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:35.799916   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.802372   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.802776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802882   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.803057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.803200   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.803304   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.886188   59645 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:35.890053   59645 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:35.890090   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:35.890157   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:35.890227   59645 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:35.890317   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:35.899121   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:35.921904   59645 start.go:296] duration metric: took 122.381588ms for postStartSetup
	I0725 18:49:35.921942   59645 fix.go:56] duration metric: took 20.148249245s for fixHost
	I0725 18:49:35.921960   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.924865   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925265   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.925300   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925414   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.925608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925876   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.926011   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.926191   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.926205   59645 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:49:36.036748   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933376.013042854
	
	I0725 18:49:36.036779   59645 fix.go:216] guest clock: 1721933376.013042854
	I0725 18:49:36.036790   59645 fix.go:229] Guest: 2024-07-25 18:49:36.013042854 +0000 UTC Remote: 2024-07-25 18:49:35.921945116 +0000 UTC m=+286.890099623 (delta=91.097738ms)
	I0725 18:49:36.036855   59645 fix.go:200] guest clock delta is within tolerance: 91.097738ms
	I0725 18:49:36.036863   59645 start.go:83] releasing machines lock for "default-k8s-diff-port-600433", held for 20.263198657s
	I0725 18:49:36.036905   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.037178   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:36.040216   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040692   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.040717   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040881   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041501   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041596   59645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:36.041657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.041693   59645 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:36.041718   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.044433   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.044775   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044799   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045030   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045191   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.045209   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045217   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045375   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045476   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045501   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.045648   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045828   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045988   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.158410   59645 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:36.164254   59645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:36.305911   59645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:36.312544   59645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:36.312642   59645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:36.327394   59645 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:36.327420   59645 start.go:495] detecting cgroup driver to use...
	I0725 18:49:36.327497   59645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:36.342695   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:36.355528   59645 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:36.355593   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:36.369191   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:36.382786   59645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:36.498465   59645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:36.635188   59645 docker.go:233] disabling docker service ...
	I0725 18:49:36.635272   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:36.655356   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:36.671402   59645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:36.819969   59645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:36.961130   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:36.976459   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:36.995542   59645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:49:36.995607   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.006967   59645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:37.007041   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.017503   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.027807   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.037804   59645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:37.047817   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.057895   59645 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.075586   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.085987   59645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:37.095527   59645 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:37.095593   59645 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:37.107540   59645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:37.117409   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:37.246455   59645 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:49:37.383873   59645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:37.383959   59645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:37.388630   59645 start.go:563] Will wait 60s for crictl version
	I0725 18:49:37.388687   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:49:37.393190   59645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:37.439603   59645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:37.439688   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.468723   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.501339   59645 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:49:37.502895   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:37.505728   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506098   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:37.506128   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506341   59645 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:37.510432   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
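The bash pipeline above rewrites /etc/hosts so that host.minikube.internal resolves to the gateway address, replacing any stale entry. The same idempotent update expressed in Go (function and package names are illustrative; minikube itself does this over SSH with sudo):

    package hosts

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // addHostEntry keeps exactly one line mapping hostname in the hosts file,
    // mirroring the grep -v / echo pipeline in the log above.
    func addHostEntry(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // drop any stale entry for this hostname
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }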
	I0725 18:49:37.523446   59645 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:37.523608   59645 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:49:37.523691   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:37.561149   59645 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:49:37.561209   59645 ssh_runner.go:195] Run: which lz4
	I0725 18:49:37.565614   59645 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 18:49:37.569702   59645 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:37.569738   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:49:38.884355   59645 crio.go:462] duration metric: took 1.318757754s to copy over tarball
	I0725 18:49:38.884481   59645 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
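Because no preloaded images were found on the node, the cached preload tarball is copied over and unpacked into /var. A small Go sketch of the extract step, shelling out to the same tar invocation the log records (package and function names are illustrative):

    package preload

    import "os/exec"

    // extractPreload unpacks the lz4-compressed image tarball into /var,
    // preserving security.capability xattrs, exactly as logged above.
    func extractPreload(tarball string) error {
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	return cmd.Run()
    }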
	I0725 18:49:37.310225   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting to get IP...
	I0725 18:49:37.311059   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.311480   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.311557   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.311444   61209 retry.go:31] will retry after 249.654633ms: waiting for machine to come up
	I0725 18:49:37.563210   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.563727   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.563774   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.563696   61209 retry.go:31] will retry after 360.974896ms: waiting for machine to come up
	I0725 18:49:37.926464   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.927033   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.927104   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.926935   61209 retry.go:31] will retry after 392.213819ms: waiting for machine to come up
	I0725 18:49:38.320659   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.321153   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.321182   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.321107   61209 retry.go:31] will retry after 443.035852ms: waiting for machine to come up
	I0725 18:49:38.765569   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.765972   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.765996   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.765944   61209 retry.go:31] will retry after 691.876502ms: waiting for machine to come up
	I0725 18:49:39.459944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:39.460308   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:39.460354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:39.460259   61209 retry.go:31] will retry after 870.093433ms: waiting for machine to come up
	I0725 18:49:40.331944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:40.332382   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:40.332411   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:40.332301   61209 retry.go:31] will retry after 875.3931ms: waiting for machine to come up
	I0725 18:49:41.209789   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:41.210251   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:41.210275   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:41.210196   61209 retry.go:31] will retry after 1.355093494s: waiting for machine to come up
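The old-k8s-version VM has no DHCP lease yet, so libmachine polls for an IP with a delay that grows on each attempt (250ms, 360ms, 392ms, ... in the retry.go lines above). A minimal sketch of that pattern; lookupIP stands in for the libvirt lease query and is an assumption, not minikube's actual code:

    package machine

    import (
    	"errors"
    	"math/rand"
    	"time"
    )

    // waitForIP retries lookupIP with a growing, jittered delay until the
    // machine reports an address or the deadline passes.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil && ip != "" {
    			return ip, nil
    		}
    		// Jitter keeps parallel waiters from querying libvirt in lockstep.
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
    		delay = delay * 3 / 2
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }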
	I0725 18:49:41.126101   59645 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241583376s)
	I0725 18:49:41.126141   59645 crio.go:469] duration metric: took 2.24174402s to extract the tarball
	I0725 18:49:41.126152   59645 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:49:41.163655   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:41.204248   59645 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:49:41.204270   59645 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:49:41.204278   59645 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0725 18:49:41.204442   59645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-600433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:49:41.204506   59645 ssh_runner.go:195] Run: crio config
	I0725 18:49:41.248210   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:41.248239   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:41.248255   59645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:49:41.248286   59645 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-600433 NodeName:default-k8s-diff-port-600433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:49:41.248491   59645 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-600433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:49:41.248591   59645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:49:41.257987   59645 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:49:41.258057   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:49:41.267141   59645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0725 18:49:41.283078   59645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:49:41.299009   59645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0725 18:49:41.315642   59645 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0725 18:49:41.319267   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:41.330435   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:41.453042   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:49:41.471864   59645 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433 for IP: 192.168.50.221
	I0725 18:49:41.471896   59645 certs.go:194] generating shared ca certs ...
	I0725 18:49:41.471915   59645 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:41.472098   59645 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:49:41.472151   59645 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:49:41.472163   59645 certs.go:256] generating profile certs ...
	I0725 18:49:41.472271   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.key
	I0725 18:49:41.472399   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key.28cfcfe9
	I0725 18:49:41.472470   59645 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key
	I0725 18:49:41.472630   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:49:41.472681   59645 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:49:41.472696   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:49:41.472734   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:49:41.472768   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:49:41.472801   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:49:41.472875   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:41.473783   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:49:41.519536   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:49:41.570915   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:49:41.596050   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:49:41.622290   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 18:49:41.644771   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:49:41.673056   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:49:41.698215   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:49:41.720502   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:49:41.742897   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:49:41.765403   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:49:41.788097   59645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:49:41.804016   59645 ssh_runner.go:195] Run: openssl version
	I0725 18:49:41.809451   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:49:41.819312   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823677   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823731   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.829342   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:49:41.839245   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:49:41.848902   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852894   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852948   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.858231   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:49:41.868414   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:49:41.878478   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882534   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882596   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.888100   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:49:41.897994   59645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:49:41.902066   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:49:41.907593   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:49:41.913339   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:49:41.918977   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:49:41.924846   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:49:41.931208   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
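Each control-plane certificate above is checked with openssl x509 -checkend 86400, i.e. "does this certificate expire within the next 24 hours?". The same test expressed in Go with crypto/x509, reading one of the PEM files listed above (a sketch; names are illustrative):

    package certs

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, mirroring `openssl x509 -checkend`.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found in " + pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }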
	I0725 18:49:41.936979   59645 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:49:41.937105   59645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:49:41.937165   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:41.973862   59645 cri.go:89] found id: ""
	I0725 18:49:41.973954   59645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:49:41.986980   59645 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:49:41.987006   59645 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:49:41.987059   59645 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:49:41.996155   59645 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:49:41.997176   59645 kubeconfig.go:125] found "default-k8s-diff-port-600433" server: "https://192.168.50.221:8444"
	I0725 18:49:41.999255   59645 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:49:42.007863   59645 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0725 18:49:42.007898   59645 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:49:42.007910   59645 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:49:42.007950   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:42.041234   59645 cri.go:89] found id: ""
	I0725 18:49:42.041344   59645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:49:42.057752   59645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:49:42.067347   59645 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:49:42.067367   59645 kubeadm.go:157] found existing configuration files:
	
	I0725 18:49:42.067414   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0725 18:49:42.075815   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:49:42.075862   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:49:42.084352   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0725 18:49:42.092738   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:49:42.092795   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:49:42.101917   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.110104   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:49:42.110171   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.118781   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0725 18:49:42.127369   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:49:42.127417   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
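Each leftover kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444; otherwise it is removed (rm -f, so a missing file is fine) and kubeadm regenerates it in the next phase. A compact Go equivalent of one grep/rm pair (a sketch, not minikube's implementation):

    package kubeconfig

    import (
    	"bytes"
    	"os"
    )

    // removeIfStale deletes path unless it already mentions the expected
    // control-plane endpoint, mirroring the grep/rm pairs in the log above.
    func removeIfStale(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err == nil && bytes.Contains(data, []byte(endpoint)) {
    		return nil // config already points at the right endpoint, keep it
    	}
    	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	return nil
    }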
	I0725 18:49:42.136433   59645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:49:42.145402   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.256466   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.967465   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.180271   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.238156   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.333954   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:49:43.334063   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:43.834381   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
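After the kubeadm init phases, minikube polls pgrep roughly every 500ms until a kube-apiserver process started for this profile shows up. A local sketch of the same wait loop, reusing the pattern string from the log (function and package names are assumptions):

    package apiserver

    import (
    	"errors"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the kube-apiserver process appears
    // or the timeout elapses, like the repeated Run lines above.
    func waitForProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return errors.New("kube-apiserver process did not appear")
    }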
	I0725 18:49:42.566588   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:42.567061   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:42.567089   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:42.567010   61209 retry.go:31] will retry after 1.670701174s: waiting for machine to come up
	I0725 18:49:44.238961   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:44.239359   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:44.239377   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:44.239329   61209 retry.go:31] will retry after 2.028917586s: waiting for machine to come up
	I0725 18:49:46.270213   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:46.270674   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:46.270695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:46.270630   61209 retry.go:31] will retry after 2.760614678s: waiting for machine to come up
	I0725 18:49:44.335103   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:44.835115   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.334875   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.834915   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.849684   59645 api_server.go:72] duration metric: took 2.515729384s to wait for apiserver process to appear ...
	I0725 18:49:45.849717   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:49:45.849752   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.417830   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:49:48.417861   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:49:48.417898   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.496770   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.496823   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:48.850275   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.854417   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.854446   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.350652   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.356554   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:49.356585   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.849872   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.855690   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:49:49.863742   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:49:49.863770   59645 api_server.go:131] duration metric: took 4.014045168s to wait for apiserver health ...
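The healthz wait tolerates the transient 403 and 500 responses shown above and succeeds only once /healthz returns HTTP 200. A compact sketch of that loop; for brevity this client skips TLS verification, whereas minikube authenticates with the cluster's client certificates:

    package apiserver

    import (
    	"crypto/tls"
    	"errors"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200,
    // retrying through 403/500 responses like the ones logged above.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return errors.New("apiserver /healthz never became healthy")
    }

For the run above this would be called with something like waitHealthz("https://192.168.50.221:8444/healthz", time.Minute).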
	I0725 18:49:49.863780   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:49.863788   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:49.865438   59645 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:49:49.034670   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:49.035109   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:49.035136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:49.035073   61209 retry.go:31] will retry after 2.928049351s: waiting for machine to come up
	I0725 18:49:49.866747   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:49:49.877963   59645 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:49:49.898915   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:49:49.916996   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:49:49.917037   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:49:49.917049   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:49:49.917067   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:49:49.917080   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:49:49.917093   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:49:49.917105   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:49:49.917112   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:49:49.917120   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:49:49.917127   59645 system_pods.go:74] duration metric: took 18.191827ms to wait for pod list to return data ...
	I0725 18:49:49.917145   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:49:49.921009   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:49:49.921032   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:49:49.921046   59645 node_conditions.go:105] duration metric: took 3.893327ms to run NodePressure ...
	I0725 18:49:49.921064   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:50.188485   59645 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192676   59645 kubeadm.go:739] kubelet initialised
	I0725 18:49:50.192696   59645 kubeadm.go:740] duration metric: took 4.188813ms waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192710   59645 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:50.197736   59645 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.203856   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203881   59645 pod_ready.go:81] duration metric: took 6.126055ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.203891   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203897   59645 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.209211   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209233   59645 pod_ready.go:81] duration metric: took 5.32855ms for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.209242   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209248   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.216079   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216104   59645 pod_ready.go:81] duration metric: took 6.848427ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.216115   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216122   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.301694   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301718   59645 pod_ready.go:81] duration metric: took 85.5884ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.301728   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301735   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.702363   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702392   59645 pod_ready.go:81] duration metric: took 400.649914ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.702400   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702406   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.102906   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102943   59645 pod_ready.go:81] duration metric: took 400.527709ms for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.102955   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102964   59645 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.502187   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502217   59645 pod_ready.go:81] duration metric: took 399.245254ms for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.502228   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502235   59645 pod_ready.go:38] duration metric: took 1.309515361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
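The block above shows the 4m0s per-pod wait being short-circuited: because the node itself reports "Ready":"False", each control-plane pod check logs the node status and is skipped. As a rough illustration only (not minikube's pod_ready.go helper), a pod's Ready condition can be read with client-go as below; the kubeconfig path and pod name are taken from the log, everything else is an assumption.

```go
// Minimal sketch, not minikube's pod_ready.go: read a pod's Ready condition
// with client-go. The kubeconfig path and pod name are copied from the log;
// names and error handling are assumed for illustration.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19326-5877/kubeconfig")
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), kubernetes.NewForConfigOrDie(cfg), "kube-system", "metrics-server-569cc877fc-5js8s")
	fmt.Println(ready, err)
}
```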
	I0725 18:49:51.502249   59645 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:49:51.513796   59645 ops.go:34] apiserver oom_adj: -16
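The oom_adj probe above simply cats /proc/<pid>/oom_adj for the kube-apiserver process over SSH and expects -16. A hypothetical local equivalent in Go (none of this is minikube code; finding the kube-apiserver pid is left to the caller):

```go
// Hypothetical helper, not minikube code: read /proc/<pid>/oom_adj for a pid.
package main

import (
	"fmt"
	"os"
	"strings"
)

func oomAdj(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := oomAdj(os.Getpid()) // example: this process; the log checks kube-apiserver
	fmt.Println(adj, err)
}
```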
	I0725 18:49:51.513816   59645 kubeadm.go:597] duration metric: took 9.526804087s to restartPrimaryControlPlane
	I0725 18:49:51.513823   59645 kubeadm.go:394] duration metric: took 9.576855212s to StartCluster
	I0725 18:49:51.513842   59645 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.513969   59645 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:49:51.515531   59645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.515761   59645 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:49:51.515843   59645 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:49:51.515951   59645 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515975   59645 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515983   59645 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.515995   59645 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:49:51.516017   59645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-600433"
	I0725 18:49:51.516024   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516025   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:51.516022   59645 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.516103   59645 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.516123   59645 addons.go:243] addon metrics-server should already be in state true
	I0725 18:49:51.516202   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516314   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516361   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516365   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516386   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516636   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516713   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.517682   59645 out.go:177] * Verifying Kubernetes components...
	I0725 18:49:51.519072   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:51.530909   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35239
	I0725 18:49:51.531207   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0725 18:49:51.531391   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531704   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531952   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.531978   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532148   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.532169   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532291   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.532474   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.532501   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.533028   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.533069   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.534984   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0725 18:49:51.535323   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.535729   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.535749   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.536027   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.536055   59645 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.536077   59645 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:49:51.536103   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.536463   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536491   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.536518   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536562   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.548458   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I0725 18:49:51.548987   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.549539   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.549563   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.549880   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.550016   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0725 18:49:51.550105   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.550366   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.550862   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.550897   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0725 18:49:51.550975   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551220   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.551462   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.551708   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.551727   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.551768   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.552170   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.552745   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.552787   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.553221   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.554936   59645 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:49:51.556152   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:49:51.556166   59645 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:49:51.556184   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.556202   59645 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:49:51.557826   59645 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.557870   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:49:51.557892   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.558763   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559109   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.559126   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559255   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.559402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.559522   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.559637   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.560776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561142   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.561169   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561285   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.561462   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.561624   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.561769   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.572412   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I0725 18:49:51.572773   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.573256   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.573269   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.573596   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.573793   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.575260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.575503   59645 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.575513   59645 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:49:51.575523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.577887   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578208   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.578228   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578339   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.578496   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.578651   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.578775   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
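Each `new ssh client` line above corresponds to an SSH connection into the VM with the per-machine id_rsa key and the `docker` user. A minimal sketch with golang.org/x/crypto/ssh (not minikube's sshutil); the address and key path are reused from the log, and the command run is only an example:

```go
// Minimal SSH sketch, not minikube's sshutil: connect with the per-machine key
// and user shown in the log and run a single example command.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.50.221:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```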
	I0725 18:49:51.710511   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:49:51.728187   59645 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:51.810767   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:49:51.810801   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:49:51.822774   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.828890   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.841308   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:49:51.841332   59645 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:49:51.864965   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:49:51.864991   59645 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:49:51.910359   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
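The addon manifests are first copied under /etc/kubernetes/addons and then applied with the kubectl binary bundled in the VM, pointed at the in-VM kubeconfig. A sketch of the same invocation driven from Go with os/exec; the sudo/KUBECONFIG/kubectl arguments mirror the logged command, the rest is illustrative:

```go
// Sketch only: the logged apply invocation driven with os/exec. Assumes the
// sudo policy allows setting KUBECONFIG, as it does inside the minikube VM.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```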
	I0725 18:49:52.699480   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699512   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699488   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699573   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699812   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699829   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699839   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699893   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.699926   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699940   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699956   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699968   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.700056   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700086   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700202   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700218   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700248   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.704859   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.704873   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.705126   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.705144   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.794977   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795000   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795318   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795339   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795341   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.795346   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795360   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795632   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795657   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795668   59645 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-600433"
	I0725 18:49:52.797643   59645 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:49:52.798886   59645 addons.go:510] duration metric: took 1.283046902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0725 18:49:53.731631   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
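After the addons are enabled, node_ready.go keeps polling the node object until its Ready condition turns True; it stays "False" for several seconds in this run. A minimal polling sketch with client-go (not minikube's node_ready.go); the interval is arbitrary, the 6m timeout and node name come from the log:

```go
// Minimal polling sketch, not minikube's node_ready.go: check the node's
// Ready condition until it is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(context.Background(), cs, "default-k8s-diff-port-600433"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```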
	I0725 18:49:51.964707   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:51.965228   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:51.965263   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:51.965151   61209 retry.go:31] will retry after 3.053047755s: waiting for machine to come up
	I0725 18:49:55.022350   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022815   60176 main.go:141] libmachine: (old-k8s-version-108542) Found IP for machine: 192.168.39.29
	I0725 18:49:55.022846   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has current primary IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022858   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserving static IP address...
	I0725 18:49:55.023277   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserved static IP address: 192.168.39.29
	I0725 18:49:55.023333   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.023342   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting for SSH to be available...
	I0725 18:49:55.023394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | skip adding static IP to network mk-old-k8s-version-108542 - found existing host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"}
	I0725 18:49:55.023425   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Getting to WaitForSSH function...
	I0725 18:49:55.025250   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025544   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.025574   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025668   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH client type: external
	I0725 18:49:55.025699   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa (-rw-------)
	I0725 18:49:55.025731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:55.025753   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | About to run SSH command:
	I0725 18:49:55.025770   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | exit 0
	I0725 18:49:55.152309   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | SSH cmd err, output: <nil>: 
	I0725 18:49:55.152720   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:49:55.153338   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.155460   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.155755   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155969   60176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json ...
	I0725 18:49:55.156128   60176 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:55.156143   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:55.156307   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.158465   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.158795   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.158827   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.159012   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.159174   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159366   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159512   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.159688   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.159902   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.159914   60176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:55.268422   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:55.268446   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268707   60176 buildroot.go:166] provisioning hostname "old-k8s-version-108542"
	I0725 18:49:55.268732   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268931   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.271599   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.271913   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.271949   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.272120   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.272285   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272490   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272657   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.272830   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.273003   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.273017   60176 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-108542 && echo "old-k8s-version-108542" | sudo tee /etc/hostname
	I0725 18:49:55.398261   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-108542
	
	I0725 18:49:55.398291   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.401090   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.401517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401669   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.401870   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402026   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402182   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.402380   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.402621   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.402648   60176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-108542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-108542/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-108542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:55.523079   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:55.523115   60176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:55.523147   60176 buildroot.go:174] setting up certificates
	I0725 18:49:55.523156   60176 provision.go:84] configureAuth start
	I0725 18:49:55.523165   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.523486   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.526235   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526644   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.526675   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526875   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.529466   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.529836   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.529865   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.530004   60176 provision.go:143] copyHostCerts
	I0725 18:49:55.530058   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:55.530068   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:55.530113   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:55.530198   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:55.530205   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:55.530225   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:55.530386   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:55.530401   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:55.530426   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:55.530494   60176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-108542 san=[127.0.0.1 192.168.39.29 localhost minikube old-k8s-version-108542]
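The server certificate is generated against the CA with the SAN list shown above (127.0.0.1, the VM IP, localhost, minikube and the profile name). A hedged sketch of such a template with crypto/x509; key generation and signing against the CA are omitted, and all values are copied from the log purely as examples, not as minikube's provision code:

```go
// Hedged sketch, not minikube's provisioning code: an x509 server-cert
// template carrying the SANs listed in the log; signing is omitted.
package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func serverCertTemplate() *x509.Certificate {
	return &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-108542"}},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-108542"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.29")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
}

func main() { _ = serverCertTemplate() }
```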
	I0725 18:49:55.740503   60176 provision.go:177] copyRemoteCerts
	I0725 18:49:55.740561   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:55.740585   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.743257   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743582   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.743615   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743798   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.743997   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.744160   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.744312   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:55.825771   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:55.847516   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0725 18:49:55.869368   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:55.893223   60176 provision.go:87] duration metric: took 370.054854ms to configureAuth
	I0725 18:49:55.893255   60176 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:55.893425   60176 config.go:182] Loaded profile config "old-k8s-version-108542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:49:55.893500   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.896394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896703   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.896758   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896962   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.897161   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897431   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897631   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.897855   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.898023   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.898036   60176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:56.181257   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:56.181300   60176 machine.go:97] duration metric: took 1.025159397s to provisionDockerMachine
	I0725 18:49:56.181315   60176 start.go:293] postStartSetup for "old-k8s-version-108542" (driver="kvm2")
	I0725 18:49:56.181334   60176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:56.181353   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.181666   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:56.181688   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.184354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.184718   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184851   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.185034   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.185185   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.185308   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.266683   60176 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:56.270387   60176 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:56.270407   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:56.270474   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:56.270559   60176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:56.270668   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:56.279276   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:56.302444   60176 start.go:296] duration metric: took 121.115308ms for postStartSetup
	I0725 18:49:56.302497   60176 fix.go:56] duration metric: took 20.26546429s for fixHost
	I0725 18:49:56.302517   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.305136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.305517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305706   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.305922   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306074   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306193   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.306317   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:56.306502   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:56.306514   60176 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 18:49:56.412717   60732 start.go:364] duration metric: took 2m4.976127328s to acquireMachinesLock for "embed-certs-646344"
	I0725 18:49:56.412771   60732 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:56.412782   60732 fix.go:54] fixHost starting: 
	I0725 18:49:56.413158   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:56.413188   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:56.432299   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0725 18:49:56.432712   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:56.433231   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:49:56.433260   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:56.433647   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:56.433868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:49:56.434040   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:49:56.435582   60732 fix.go:112] recreateIfNeeded on embed-certs-646344: state=Stopped err=<nil>
	I0725 18:49:56.435617   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	W0725 18:49:56.435793   60732 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:56.437567   60732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-646344" ...
	I0725 18:49:56.412575   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933396.389223979
	
	I0725 18:49:56.412602   60176 fix.go:216] guest clock: 1721933396.389223979
	I0725 18:49:56.412612   60176 fix.go:229] Guest: 2024-07-25 18:49:56.389223979 +0000 UTC Remote: 2024-07-25 18:49:56.302501019 +0000 UTC m=+249.953644815 (delta=86.72296ms)
	I0725 18:49:56.412634   60176 fix.go:200] guest clock delta is within tolerance: 86.72296ms
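The guest clock is read over SSH with `date +%s.%N` and compared against the host clock; the 86.72296ms delta above is inside the allowed tolerance. A small sketch (not minikube's fix.go) that parses that output format and computes such a delta:

```go
// Small sketch, not minikube's fix.go: parse `date +%s.%N` output from the
// guest and compute the host/guest clock delta reported in the log.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721933396.389223979") // value taken from the log
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest: %s delta: %s\n", guest.UTC(), time.Since(guest))
}
```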
	I0725 18:49:56.412639   60176 start.go:83] releasing machines lock for "old-k8s-version-108542", held for 20.375658703s
	I0725 18:49:56.412668   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.412935   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:56.415814   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416191   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.416219   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416398   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.416862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417065   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417160   60176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:56.417201   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.417309   60176 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:56.417329   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.420122   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420371   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420526   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420550   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420682   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.420816   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420846   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.420850   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420984   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.421058   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421126   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.421198   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.421272   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421418   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.529391   60176 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:56.535114   60176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:56.674979   60176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:56.681160   60176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:56.681260   60176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:56.696192   60176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:56.696215   60176 start.go:495] detecting cgroup driver to use...
	I0725 18:49:56.696309   60176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:56.713088   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:56.727033   60176 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:56.727095   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:56.742008   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:56.756146   60176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:56.884075   60176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:57.051613   60176 docker.go:233] disabling docker service ...
	I0725 18:49:57.051742   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:57.068011   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:57.082300   60176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:57.208673   60176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:57.372393   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:57.397281   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:57.418913   60176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0725 18:49:57.418978   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.429833   60176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:57.429909   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.440717   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.451076   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.465052   60176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
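The sed commands above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: pause_image is pinned to registry.k8s.io/pause:3.2, cgroup_manager is set to cgroupfs, and conmon_cgroup is reset to "pod". Purely as an illustration of the first of those edits (minikube itself does this with sed over SSH), the same rewrite expressed in Go:

```go
// Illustration only; minikube performs this edit with sed over SSH. Rewrite
// the pause_image line of the CRI-O drop-in config as the log describes.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
```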
	I0725 18:49:57.476592   60176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:57.487164   60176 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:57.487225   60176 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:57.501748   60176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
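Because the br_netfilter sysctl is not yet available, the module is loaded with modprobe and IPv4 forwarding is switched on by writing to /proc. The equivalent of the logged echo, sketched in Go (needs root, shown only for illustration):

```go
// Equivalent of the logged `echo 1 > /proc/sys/net/ipv4/ip_forward`; requires root.
package main

import "os"

func main() {
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
}
```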
	I0725 18:49:57.514743   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:57.658648   60176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:49:57.811455   60176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:57.811534   60176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:57.816193   60176 start.go:563] Will wait 60s for crictl version
	I0725 18:49:57.816267   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:49:57.819557   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:57.854511   60176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:57.854594   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.881542   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.910664   60176 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0725 18:49:55.733934   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:58.232025   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:56.438776   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Start
	I0725 18:49:56.438950   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring networks are active...
	I0725 18:49:56.439813   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network default is active
	I0725 18:49:56.440144   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network mk-embed-certs-646344 is active
	I0725 18:49:56.440644   60732 main.go:141] libmachine: (embed-certs-646344) Getting domain xml...
	I0725 18:49:56.441344   60732 main.go:141] libmachine: (embed-certs-646344) Creating domain...
	I0725 18:49:57.747307   60732 main.go:141] libmachine: (embed-certs-646344) Waiting to get IP...
	I0725 18:49:57.748364   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.748801   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.748852   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.748752   61389 retry.go:31] will retry after 207.883752ms: waiting for machine to come up
	I0725 18:49:57.958328   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.958813   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.958837   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.958773   61389 retry.go:31] will retry after 256.983672ms: waiting for machine to come up
	I0725 18:49:58.217316   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.217798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.217858   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.217760   61389 retry.go:31] will retry after 427.650618ms: waiting for machine to come up
	I0725 18:49:58.647668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.648053   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.648088   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.648021   61389 retry.go:31] will retry after 585.454725ms: waiting for machine to come up
	I0725 18:49:59.235003   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.235582   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.235612   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.235535   61389 retry.go:31] will retry after 477.660763ms: waiting for machine to come up
	I0725 18:49:59.715182   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.715675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.715706   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.715628   61389 retry.go:31] will retry after 775.403931ms: waiting for machine to come up
	I0725 18:50:00.492798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:00.493211   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:00.493239   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:00.493160   61389 retry.go:31] will retry after 1.086502086s: waiting for machine to come up
	I0725 18:49:57.912004   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:57.914958   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915429   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:57.915462   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915628   60176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:57.919685   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:57.932248   60176 kubeadm.go:883] updating cluster {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:57.932392   60176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:49:57.932440   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:57.982230   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:49:57.982305   60176 ssh_runner.go:195] Run: which lz4
	I0725 18:49:57.986657   60176 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 18:49:57.990932   60176 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:57.990956   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0725 18:49:59.415735   60176 crio.go:462] duration metric: took 1.429111358s to copy over tarball
	I0725 18:49:59.415800   60176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:49:59.234882   59645 node_ready.go:49] node "default-k8s-diff-port-600433" has status "Ready":"True"
	I0725 18:49:59.234909   59645 node_ready.go:38] duration metric: took 7.506682834s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:59.234921   59645 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:59.243034   59645 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.249940   59645 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace has status "Ready":"True"
	I0725 18:49:59.250024   59645 pod_ready.go:81] duration metric: took 6.957177ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.250051   59645 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.258057   59645 pod_ready.go:102] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:01.757802   59645 pod_ready.go:92] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.757828   59645 pod_ready.go:81] duration metric: took 2.50775832s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.757840   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762837   59645 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.762862   59645 pod_ready.go:81] duration metric: took 5.014715ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762874   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768001   59645 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.768027   59645 pod_ready.go:81] duration metric: took 5.144999ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768039   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772551   59645 pod_ready.go:92] pod "kube-proxy-smhmv" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.772574   59645 pod_ready.go:81] duration metric: took 4.526528ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772585   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.580990   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:01.581438   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:01.581464   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:01.581397   61389 retry.go:31] will retry after 1.452798696s: waiting for machine to come up
	I0725 18:50:03.036272   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:03.036730   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:03.036766   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:03.036682   61389 retry.go:31] will retry after 1.667137658s: waiting for machine to come up
	I0725 18:50:04.705567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:04.705992   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:04.706019   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:04.705958   61389 retry.go:31] will retry after 2.010863389s: waiting for machine to come up
	I0725 18:50:02.370917   60176 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.955090558s)
	I0725 18:50:02.370951   60176 crio.go:469] duration metric: took 2.955186203s to extract the tarball
	I0725 18:50:02.370960   60176 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:50:02.411686   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:02.448550   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:50:02.448575   60176 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:02.448653   60176 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0725 18:50:02.448657   60176 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.448722   60176 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.448739   60176 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.448661   60176 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450195   60176 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.450213   60176 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0725 18:50:02.450237   60176 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.450335   60176 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.450375   60176 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.450489   60176 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.711747   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.718711   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0725 18:50:02.721465   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.721473   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.728447   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.745432   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.745791   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.776147   60176 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0725 18:50:02.776200   60176 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.776245   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.857374   60176 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0725 18:50:02.857423   60176 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0725 18:50:02.857486   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.876850   60176 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0725 18:50:02.876897   60176 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.876922   60176 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0725 18:50:02.876963   60176 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.876974   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877024   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877044   60176 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0725 18:50:02.877071   60176 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.877107   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.896960   60176 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0725 18:50:02.897008   60176 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.897011   60176 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0725 18:50:02.897042   60176 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.897053   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897061   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.897083   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897120   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0725 18:50:02.897148   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.897196   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.897248   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.992459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.992499   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0725 18:50:03.005360   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0725 18:50:03.005381   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0725 18:50:03.005435   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0725 18:50:03.005459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:03.005503   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0725 18:50:03.042218   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0725 18:50:03.054960   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0725 18:50:03.279419   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:03.416646   60176 cache_images.go:92] duration metric: took 968.05409ms to LoadCachedImages
	W0725 18:50:03.416750   60176 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0725 18:50:03.416767   60176 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.20.0 crio true true} ...
	I0725 18:50:03.416896   60176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-108542 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:03.416979   60176 ssh_runner.go:195] Run: crio config
	I0725 18:50:03.470581   60176 cni.go:84] Creating CNI manager for ""
	I0725 18:50:03.470611   60176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:03.470627   60176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:03.470647   60176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-108542 NodeName:old-k8s-version-108542 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0725 18:50:03.470772   60176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-108542"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:03.470828   60176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0725 18:50:03.481757   60176 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:03.481839   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:03.494342   60176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0725 18:50:03.511779   60176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:03.532137   60176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0725 18:50:03.551049   60176 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:03.554903   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
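The bash one-liner above drops any stale control-plane.minikube.internal entry and re-appends the current control-plane address, so after it runs /etc/hosts should contain a line like the following (a sketch reconstructed from the command; the tab separator matches the grep pattern):

	192.168.39.29	control-plane.minikube.internal
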
	I0725 18:50:03.566677   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:03.687540   60176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:03.710900   60176 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542 for IP: 192.168.39.29
	I0725 18:50:03.710922   60176 certs.go:194] generating shared ca certs ...
	I0725 18:50:03.710937   60176 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:03.711088   60176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:03.711126   60176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:03.711132   60176 certs.go:256] generating profile certs ...
	I0725 18:50:03.711231   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key
	I0725 18:50:03.711282   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0
	I0725 18:50:03.711315   60176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key
	I0725 18:50:03.711420   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:03.711449   60176 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:03.711458   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:03.711479   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:03.711499   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:03.711520   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:03.711562   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:03.712203   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:03.762265   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:03.804226   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:03.840167   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:03.868353   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 18:50:03.893425   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:03.917266   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:03.946205   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:03.974128   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:04.001887   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:04.026495   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:04.049083   60176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:04.065407   60176 ssh_runner.go:195] Run: openssl version
	I0725 18:50:04.071064   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:04.082038   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086705   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086760   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.092445   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:04.103129   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:04.113789   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118390   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118467   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.123884   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:04.134230   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:04.144372   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148559   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148620   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.153744   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:04.163757   60176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:04.167873   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:04.173706   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:04.179385   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:04.185222   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:04.190716   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:04.196938   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 18:50:04.202361   60176 kubeadm.go:392] StartCluster: {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:04.202447   60176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:04.202505   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.243628   60176 cri.go:89] found id: ""
	I0725 18:50:04.243703   60176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:04.253768   60176 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:04.253788   60176 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:04.253841   60176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:04.264596   60176 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:04.265990   60176 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-108542" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:04.266997   60176 kubeconfig.go:62] /home/jenkins/minikube-integration/19326-5877/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-108542" cluster setting kubeconfig missing "old-k8s-version-108542" context setting]
	I0725 18:50:04.268480   60176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:04.388386   60176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:04.398469   60176 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.29
	I0725 18:50:04.398517   60176 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:04.398530   60176 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:04.398590   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.434823   60176 cri.go:89] found id: ""
	I0725 18:50:04.434906   60176 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:04.453378   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:04.463520   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:04.463559   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:04.463611   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:04.473075   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:04.473138   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:04.482881   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:04.494801   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:04.494875   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:04.507011   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.516433   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:04.516505   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.528076   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:04.537505   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:04.537572   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:04.547429   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:04.556717   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:04.754947   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.606839   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.850150   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.957944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:06.039317   60176 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:06.039436   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:04.245768   59645 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:05.780345   59645 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:05.780380   59645 pod_ready.go:81] duration metric: took 4.007784646s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:05.780395   59645 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:07.787259   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:06.718406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:06.718961   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:06.718995   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:06.718902   61389 retry.go:31] will retry after 2.686345537s: waiting for machine to come up
	I0725 18:50:09.406854   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:09.407346   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:09.407388   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:09.407313   61389 retry.go:31] will retry after 3.432781605s: waiting for machine to come up
	I0725 18:50:06.539802   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.539809   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.539594   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.040315   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.539830   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.039578   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.539828   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:11.039598   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.285959   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:12.287101   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:14.181127   59378 start.go:364] duration metric: took 53.405056746s to acquireMachinesLock for "no-preload-371663"
	I0725 18:50:14.181178   59378 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:50:14.181187   59378 fix.go:54] fixHost starting: 
	I0725 18:50:14.181648   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:14.181689   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:14.198182   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I0725 18:50:14.198640   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:14.199151   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:14.199176   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:14.199619   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:14.199815   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:14.199945   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:14.201475   59378 fix.go:112] recreateIfNeeded on no-preload-371663: state=Stopped err=<nil>
	I0725 18:50:14.201496   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	W0725 18:50:14.201653   59378 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:50:14.203496   59378 out.go:177] * Restarting existing kvm2 VM for "no-preload-371663" ...
	I0725 18:50:12.841703   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842187   60732 main.go:141] libmachine: (embed-certs-646344) Found IP for machine: 192.168.61.133
	I0725 18:50:12.842222   60732 main.go:141] libmachine: (embed-certs-646344) Reserving static IP address...
	I0725 18:50:12.842234   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has current primary IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842625   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.842650   60732 main.go:141] libmachine: (embed-certs-646344) DBG | skip adding static IP to network mk-embed-certs-646344 - found existing host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"}
	I0725 18:50:12.842660   60732 main.go:141] libmachine: (embed-certs-646344) Reserved static IP address: 192.168.61.133
	I0725 18:50:12.842671   60732 main.go:141] libmachine: (embed-certs-646344) Waiting for SSH to be available...
	I0725 18:50:12.842684   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Getting to WaitForSSH function...
	I0725 18:50:12.844916   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845214   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.845237   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845372   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH client type: external
	I0725 18:50:12.845406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa (-rw-------)
	I0725 18:50:12.845474   60732 main.go:141] libmachine: (embed-certs-646344) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:12.845498   60732 main.go:141] libmachine: (embed-certs-646344) DBG | About to run SSH command:
	I0725 18:50:12.845528   60732 main.go:141] libmachine: (embed-certs-646344) DBG | exit 0
	I0725 18:50:12.968383   60732 main.go:141] libmachine: (embed-certs-646344) DBG | SSH cmd err, output: <nil>: 
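The WaitForSSH probe above shells out to the system ssh client with the arguments logged a few lines earlier and simply runs "exit 0"; flattened into one command it is roughly the following (a sketch assembled only from the logged options, with the argument order tidied up):

	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa -p 22 docker@192.168.61.133 "exit 0"
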
	I0725 18:50:12.968690   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetConfigRaw
	I0725 18:50:12.969249   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:12.971567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972072   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.972102   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972338   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:50:12.972526   60732 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:12.972544   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:12.972739   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:12.974938   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975308   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.975336   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975462   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:12.975671   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.975831   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.976010   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:12.976184   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:12.976414   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:12.976428   60732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:13.076310   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:13.076369   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076609   60732 buildroot.go:166] provisioning hostname "embed-certs-646344"
	I0725 18:50:13.076637   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076830   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.079542   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.079895   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.079923   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.080050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.080232   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080385   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080530   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.080722   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.080917   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.080935   60732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-646344 && echo "embed-certs-646344" | sudo tee /etc/hostname
	I0725 18:50:13.193782   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-646344
	
	I0725 18:50:13.193814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.196822   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197149   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.197192   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197367   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.197581   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197772   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197906   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.198079   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.198292   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.198315   60732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-646344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-646344/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-646344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:13.313070   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:50:13.313098   60732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:13.313146   60732 buildroot.go:174] setting up certificates
	I0725 18:50:13.313161   60732 provision.go:84] configureAuth start
	I0725 18:50:13.313176   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.313457   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:13.316245   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316666   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.316695   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.319178   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319516   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.319540   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319697   60732 provision.go:143] copyHostCerts
	I0725 18:50:13.319751   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:13.319763   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:13.319816   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:13.319900   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:13.319908   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:13.319929   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:13.319981   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:13.319988   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:13.320004   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:13.320051   60732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.embed-certs-646344 san=[127.0.0.1 192.168.61.133 embed-certs-646344 localhost minikube]
	I0725 18:50:13.540822   60732 provision.go:177] copyRemoteCerts
	I0725 18:50:13.540881   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:13.540903   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.543520   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.543805   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.543855   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.544013   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.544227   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.544450   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.544649   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:13.629982   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:13.652453   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:13.674398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:50:13.698302   60732 provision.go:87] duration metric: took 385.127611ms to configureAuth
	I0725 18:50:13.698329   60732 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:13.698501   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:13.698574   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.701274   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.701702   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701850   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.702049   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702345   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.702510   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.702699   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.702720   60732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:13.954912   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:13.954942   60732 machine.go:97] duration metric: took 982.402505ms to provisionDockerMachine
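	The "%!s(MISSING)" in the echoed command a few lines above is Go's fmt marker for a %s verb whose argument was dropped by the logger; it is not what ran on the guest. The command presumably executed in its plain-printf form, roughly the sketch below (a reconstruction from this log output, not verbatim minikube source):

	    # reconstructed: write the CRI-O env file and restart the service;
	    # the --insecure-registry value is taken from the log line above
	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio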
	I0725 18:50:13.954953   60732 start.go:293] postStartSetup for "embed-certs-646344" (driver="kvm2")
	I0725 18:50:13.954963   60732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:13.954978   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:13.955269   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:13.955301   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.957946   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958309   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.958332   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958459   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.958663   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.958805   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.959017   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.039361   60732 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:14.043389   60732 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:14.043416   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:14.043488   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:14.043588   60732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:14.043686   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:14.053277   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:14.075725   60732 start.go:296] duration metric: took 120.758673ms for postStartSetup
	I0725 18:50:14.075772   60732 fix.go:56] duration metric: took 17.662990552s for fixHost
	I0725 18:50:14.075795   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.078338   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078728   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.078782   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078932   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.079187   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079393   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.079763   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:14.080049   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:14.080068   60732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:50:14.180948   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933414.131955665
	
	I0725 18:50:14.180974   60732 fix.go:216] guest clock: 1721933414.131955665
	I0725 18:50:14.180983   60732 fix.go:229] Guest: 2024-07-25 18:50:14.131955665 +0000 UTC Remote: 2024-07-25 18:50:14.075776451 +0000 UTC m=+142.772748611 (delta=56.179214ms)
	I0725 18:50:14.181032   60732 fix.go:200] guest clock delta is within tolerance: 56.179214ms
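	The garbled "date +%!s(MISSING).%!N(MISSING)" above is presumably "date +%s.%N", i.e. seconds and nanoseconds since the epoch, which matches the shape of the guest clock value logged (1721933414.131955665). A minimal equivalent, assuming that reading of the format string:

	    # seconds.nanoseconds since the epoch, as compared against the host clock
	    # for the delta/tolerance check logged above
	    date +%s.%N    # prints e.g. 1721933414.131955665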
	I0725 18:50:14.181038   60732 start.go:83] releasing machines lock for "embed-certs-646344", held for 17.768291807s
	I0725 18:50:14.181069   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.181338   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:14.183693   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184035   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.184065   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184195   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184748   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184936   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.185004   60732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:14.185050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.185172   60732 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:14.185203   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.187720   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188004   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188071   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188095   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188367   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188393   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188397   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188555   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.188567   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188738   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188757   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.188868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.189001   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.270424   60732 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:14.322480   60732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:14.468034   60732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:14.474022   60732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:14.474090   60732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:14.494765   60732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:50:14.494793   60732 start.go:495] detecting cgroup driver to use...
	I0725 18:50:14.494862   60732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:14.515047   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:14.531708   60732 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:14.531773   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:14.546508   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:14.560878   60732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:14.681034   60732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:14.830960   60732 docker.go:233] disabling docker service ...
	I0725 18:50:14.831032   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:14.853115   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:14.869852   60732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:14.995284   60732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:15.109759   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:15.123118   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:15.140723   60732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:50:15.140792   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.150912   60732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:15.150968   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.161603   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.173509   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.183857   60732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:15.195023   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.207216   60732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.223821   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.234472   60732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:15.243979   60732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:15.244032   60732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:15.256791   60732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
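	The three commands above follow a common pattern: probe the bridge-netfilter sysctl, load br_netfilter when the key is missing (the status-255 error only means the module was not loaded yet), then enable IPv4 forwarding. A condensed sketch of that pattern, not minikube's actual Go code:

	    # load br_netfilter only if the sysctl key is absent, then enable forwarding
	    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	      sudo modprobe br_netfilter
	    fi
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"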
	I0725 18:50:15.268608   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:15.396398   60732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:50:15.528593   60732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:15.528659   60732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:15.534218   60732 start.go:563] Will wait 60s for crictl version
	I0725 18:50:15.534288   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:50:15.537933   60732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:15.583719   60732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:15.583824   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.613123   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.643327   60732 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:50:14.204765   59378 main.go:141] libmachine: (no-preload-371663) Calling .Start
	I0725 18:50:14.204935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring networks are active...
	I0725 18:50:14.205596   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network default is active
	I0725 18:50:14.205935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network mk-no-preload-371663 is active
	I0725 18:50:14.206473   59378 main.go:141] libmachine: (no-preload-371663) Getting domain xml...
	I0725 18:50:14.207048   59378 main.go:141] libmachine: (no-preload-371663) Creating domain...
	I0725 18:50:15.487909   59378 main.go:141] libmachine: (no-preload-371663) Waiting to get IP...
	I0725 18:50:15.488775   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.489188   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.489244   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.489164   61562 retry.go:31] will retry after 288.758246ms: waiting for machine to come up
	I0725 18:50:15.779810   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.780284   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.780346   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.780234   61562 retry.go:31] will retry after 255.724346ms: waiting for machine to come up
	I0725 18:50:15.644608   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:15.647958   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648356   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:15.648386   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648602   60732 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:15.652342   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:15.664409   60732 kubeadm.go:883] updating cluster {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:15.664587   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:50:15.664658   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:15.701646   60732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:50:15.701703   60732 ssh_runner.go:195] Run: which lz4
	I0725 18:50:15.705629   60732 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:50:15.709366   60732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:50:15.709398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:50:11.540367   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.040178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.039929   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.540517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.040281   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.540287   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.039549   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.540265   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:16.039520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.828431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:17.287944   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:16.037762   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.038357   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.038391   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.038313   61562 retry.go:31] will retry after 486.960289ms: waiting for machine to come up
	I0725 18:50:16.527269   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.527868   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.527899   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.527826   61562 retry.go:31] will retry after 389.104399ms: waiting for machine to come up
	I0725 18:50:16.918319   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.918911   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.918945   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.918854   61562 retry.go:31] will retry after 690.549271ms: waiting for machine to come up
	I0725 18:50:17.610632   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:17.611242   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:17.611269   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:17.611199   61562 retry.go:31] will retry after 753.624655ms: waiting for machine to come up
	I0725 18:50:18.366551   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:18.367078   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:18.367119   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:18.367022   61562 retry.go:31] will retry after 1.115992813s: waiting for machine to come up
	I0725 18:50:19.484121   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:19.484611   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:19.484641   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:19.484556   61562 retry.go:31] will retry after 1.306583093s: waiting for machine to come up
	I0725 18:50:20.793118   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:20.793603   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:20.793630   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:20.793548   61562 retry.go:31] will retry after 1.175948199s: waiting for machine to come up
	I0725 18:50:17.015043   60732 crio.go:462] duration metric: took 1.309449954s to copy over tarball
	I0725 18:50:17.015143   60732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:50:19.256777   60732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241585619s)
	I0725 18:50:19.256816   60732 crio.go:469] duration metric: took 2.241743782s to extract the tarball
	I0725 18:50:19.256825   60732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:50:19.293259   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:19.346692   60732 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:50:19.346714   60732 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:50:19.346722   60732 kubeadm.go:934] updating node { 192.168.61.133 8443 v1.30.3 crio true true} ...
	I0725 18:50:19.346822   60732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-646344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:19.346884   60732 ssh_runner.go:195] Run: crio config
	I0725 18:50:19.391246   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:19.391272   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:19.391287   60732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:19.391320   60732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.133 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-646344 NodeName:embed-certs-646344 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.133"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:19.391518   60732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-646344"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.133
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.133"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:19.391597   60732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:50:19.401672   60732 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:19.401743   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:19.410693   60732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0725 18:50:19.428155   60732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:19.443819   60732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0725 18:50:19.461139   60732 ssh_runner.go:195] Run: grep 192.168.61.133	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:19.465121   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.133	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:19.478939   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:19.593175   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:19.609679   60732 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344 for IP: 192.168.61.133
	I0725 18:50:19.609705   60732 certs.go:194] generating shared ca certs ...
	I0725 18:50:19.609726   60732 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:19.609918   60732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:19.609976   60732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:19.609989   60732 certs.go:256] generating profile certs ...
	I0725 18:50:19.610096   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/client.key
	I0725 18:50:19.610176   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key.b1982a11
	I0725 18:50:19.610227   60732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key
	I0725 18:50:19.610380   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:19.610424   60732 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:19.610436   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:19.610467   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:19.610490   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:19.610518   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:19.610575   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:19.611227   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:19.647448   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:19.679186   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:19.703996   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:19.731396   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0725 18:50:19.759550   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:50:19.795812   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:19.818419   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:19.840831   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:19.862271   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:19.886159   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:19.910827   60732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:19.926056   60732 ssh_runner.go:195] Run: openssl version
	I0725 18:50:19.931721   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:19.942217   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946261   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946324   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.951695   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:19.961642   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:19.971592   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975615   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975671   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.980904   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:19.991023   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:20.001258   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005322   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005398   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.010666   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
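	The symlink names 51391683.0, 3ec20f2e.0 and b5213941.0 above are OpenSSL subject-hash names: certificates in a lookup directory are found via files named <hash>.0, where the hash comes from "openssl x509 -hash". A sketch of that convention using one of the certificates from this log (illustrative only):

	    # compute the subject hash and create the <hash>.0 lookup link OpenSSL expects
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"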
	I0725 18:50:20.021300   60732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:20.025462   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:20.031181   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:20.037216   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:20.043670   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:20.051210   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:20.057316   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
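	The run above wires each CA into the host trust store by OpenSSL subject hash (the <hash>.0 symlink convention under /etc/ssl/certs) and then uses -checkend 86400 to confirm every control-plane certificate is still valid for at least 24 hours. A minimal manual sketch of the same checks, assuming the certificate paths already shown in this log:
	
		# print the subject hash that names the trust-store symlink (b5213941 for minikubeCA.pem above)
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
		# exit status 0 only if the certificate is still valid 86400 seconds (24h) from now
		openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400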
	I0725 18:50:20.062598   60732 kubeadm.go:392] StartCluster: {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:20.062719   60732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:20.062793   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.098154   60732 cri.go:89] found id: ""
	I0725 18:50:20.098229   60732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:20.107991   60732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:20.108017   60732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:20.108066   60732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:20.117394   60732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:20.118456   60732 kubeconfig.go:125] found "embed-certs-646344" server: "https://192.168.61.133:8443"
	I0725 18:50:20.120660   60732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:20.129546   60732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.133
	I0725 18:50:20.129576   60732 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:20.129589   60732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:20.129645   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.162792   60732 cri.go:89] found id: ""
	I0725 18:50:20.162883   60732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:20.178972   60732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:20.187981   60732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:20.188005   60732 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:20.188060   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:20.197371   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:20.197429   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:20.206704   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:20.215394   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:20.215459   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:20.224116   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.232437   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:20.232495   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.241577   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:20.249916   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:20.249976   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:20.258838   60732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:20.267902   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:20.380000   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:16.539725   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.539756   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.040221   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.539666   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.040416   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.540379   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.040257   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.540153   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:21.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.787705   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:22.230346   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:21.971072   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:21.971517   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:21.971544   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:21.971471   61562 retry.go:31] will retry after 1.926890777s: waiting for machine to come up
	I0725 18:50:23.900824   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:23.901448   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:23.901479   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:23.901397   61562 retry.go:31] will retry after 1.777870483s: waiting for machine to come up
	I0725 18:50:25.681617   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:25.682161   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:25.682190   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:25.682122   61562 retry.go:31] will retry after 2.846649743s: waiting for machine to come up
	I0725 18:50:21.816404   60732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.436368273s)
	I0725 18:50:21.816441   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.014796   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.093533   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.201595   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:22.201692   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.702680   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.202769   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.701909   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.720378   60732 api_server.go:72] duration metric: took 1.518780528s to wait for apiserver process to appear ...
	I0725 18:50:23.720468   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:23.720503   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:21.540165   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.539544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.040164   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.539691   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.040229   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.540225   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.039517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.540158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.542598   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:26.542661   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:26.542677   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.653001   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.653044   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:26.721231   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.725819   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.725851   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.221435   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.226412   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.226452   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.720962   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.726521   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.726550   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:28.221186   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:28.225358   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:50:28.232310   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:50:28.232348   60732 api_server.go:131] duration metric: took 4.511861085s to wait for apiserver health ...
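	The loop above polls the apiserver's /healthz endpoint roughly every 500ms: an anonymous request is first rejected with 403, then the verbose check list comes back with 500 while the post-start hooks (etcd, bootstrap-roles, apiservice registration) finish, and the wait ends once a plain 200 "ok" is returned. A rough manual equivalent from the node, assuming the admin kubeconfig written by the kubeadm kubeconfig phase above:
	
		# query the same endpoint with credentials so it returns the per-check breakdown instead of 403
		sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'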
	I0725 18:50:28.232359   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:28.232368   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:28.234169   60732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:50:24.287433   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:26.287625   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.287755   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.235545   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:28.246029   60732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:50:28.265973   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:28.277752   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:28.277791   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:28.277801   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:28.277818   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:28.277830   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:28.277839   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:28.277851   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:28.277861   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:28.277868   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:28.277878   60732 system_pods.go:74] duration metric: took 11.88598ms to wait for pod list to return data ...
	I0725 18:50:28.277895   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:28.282289   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:28.282320   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:28.282335   60732 node_conditions.go:105] duration metric: took 4.431712ms to run NodePressure ...
	I0725 18:50:28.282354   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:28.551353   60732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557049   60732 kubeadm.go:739] kubelet initialised
	I0725 18:50:28.557074   60732 kubeadm.go:740] duration metric: took 5.692584ms waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557083   60732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:28.564396   60732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.568721   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568745   60732 pod_ready.go:81] duration metric: took 4.325942ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.568755   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568762   60732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.572373   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572397   60732 pod_ready.go:81] duration metric: took 3.627867ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.572404   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572411   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.576876   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576897   60732 pod_ready.go:81] duration metric: took 4.478779ms for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.576903   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576909   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.669762   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669788   60732 pod_ready.go:81] duration metric: took 92.870934ms for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.669797   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669803   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.069536   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069564   60732 pod_ready.go:81] duration metric: took 399.753713ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.069573   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069580   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.471102   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471130   60732 pod_ready.go:81] duration metric: took 401.542911ms for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.471139   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471145   60732 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.869464   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869499   60732 pod_ready.go:81] duration metric: took 398.344638ms for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.869511   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869520   60732 pod_ready.go:38] duration metric: took 1.312426343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:29.869549   60732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:29.881205   60732 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:29.881230   60732 kubeadm.go:597] duration metric: took 9.773206057s to restartPrimaryControlPlane
	I0725 18:50:29.881241   60732 kubeadm.go:394] duration metric: took 9.818649836s to StartCluster
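	Each pod_ready check above is skipped rather than failed because the node object still reports Ready=False immediately after the kubelet restart; the extra wait completes in about 1.3s once every system-critical pod has been inspected. A comparable manual wait, assuming the same admin kubeconfig and the standard kubeadm static-pod labels:
	
		kubectl --kubeconfig /etc/kubernetes/admin.conf wait --for=condition=Ready node/embed-certs-646344 --timeout=6m
		kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m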
	I0725 18:50:29.881264   60732 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.881348   60732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:29.882924   60732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.883197   60732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:29.883269   60732 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:29.883366   60732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-646344"
	I0725 18:50:29.883380   60732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-646344"
	I0725 18:50:29.883401   60732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-646344"
	W0725 18:50:29.883411   60732 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:29.883425   60732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-646344"
	I0725 18:50:29.883419   60732 addons.go:69] Setting metrics-server=true in profile "embed-certs-646344"
	I0725 18:50:29.883444   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883461   60732 addons.go:234] Setting addon metrics-server=true in "embed-certs-646344"
	W0725 18:50:29.883481   60732 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:29.883443   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:29.883512   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883840   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883870   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883929   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883969   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883935   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.884014   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.885204   60732 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:29.886676   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:29.899359   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0725 18:50:29.899418   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0725 18:50:29.899865   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900280   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900493   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900513   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900744   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900769   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900850   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901092   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901288   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.901473   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.901504   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.903520   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0725 18:50:29.903975   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.904512   60732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-646344"
	W0725 18:50:29.904529   60732 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:29.904542   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.904551   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.904558   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.904830   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.904854   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.904861   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.905388   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.905425   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.917614   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0725 18:50:29.918105   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.918628   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.918660   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.918960   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.919128   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.920885   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.922852   60732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:29.923872   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0725 18:50:29.923895   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37193
	I0725 18:50:29.924134   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:29.924148   60732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:29.924167   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.924376   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924451   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924817   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924837   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.924970   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924985   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.925223   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.925473   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.925493   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.926319   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.926366   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.926970   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.927368   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.927829   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927971   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.928192   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.928355   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.928445   60732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:28.529935   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:28.530428   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:28.530449   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:28.530381   61562 retry.go:31] will retry after 2.913225709s: waiting for machine to come up
	I0725 18:50:29.928527   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.929735   60732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:29.929755   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:29.929770   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.932668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933040   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.933066   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933304   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.933499   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.933674   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.933806   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.947401   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42223
	I0725 18:50:29.947801   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.948222   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.948249   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.948567   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.948819   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.950344   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.950550   60732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:29.950566   60732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:29.950584   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.953193   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953589   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.953618   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953892   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.954062   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.954224   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.954348   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:30.074297   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:30.095138   60732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-646344" to be "Ready" ...
	I0725 18:50:30.149031   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:30.154470   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:30.247852   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:30.247872   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:30.264189   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:30.264220   60732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:30.282583   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:30.282606   60732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:30.298927   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:31.226498   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.071992912s)
	I0725 18:50:31.226572   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226587   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.226730   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077663797s)
	I0725 18:50:31.226771   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226782   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227150   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227166   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227166   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227171   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227175   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227183   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227186   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227192   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227198   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227217   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227468   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227483   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227495   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227502   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227548   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227556   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.234538   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.234562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.234822   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.234839   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237597   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237615   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.237853   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.237871   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237871   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.237879   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237888   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.238123   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.238133   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.238144   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.238155   60732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-646344"
	I0725 18:50:31.239876   60732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:50:31.241165   60732 addons.go:510] duration metric: took 1.357900639s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0725 18:50:26.540560   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.039938   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.539928   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.039509   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.540137   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.040535   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.539745   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.039557   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.540254   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:31.040189   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.787880   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:33.288654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:31.446688   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has current primary IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447343   59378 main.go:141] libmachine: (no-preload-371663) Found IP for machine: 192.168.72.62
	I0725 18:50:31.447351   59378 main.go:141] libmachine: (no-preload-371663) Reserving static IP address...
	I0725 18:50:31.447800   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.447831   59378 main.go:141] libmachine: (no-preload-371663) DBG | skip adding static IP to network mk-no-preload-371663 - found existing host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"}
	I0725 18:50:31.447848   59378 main.go:141] libmachine: (no-preload-371663) Reserved static IP address: 192.168.72.62
	I0725 18:50:31.447862   59378 main.go:141] libmachine: (no-preload-371663) Waiting for SSH to be available...
	I0725 18:50:31.447875   59378 main.go:141] libmachine: (no-preload-371663) DBG | Getting to WaitForSSH function...
	I0725 18:50:31.449978   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450325   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.450344   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450468   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH client type: external
	I0725 18:50:31.450499   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa (-rw-------)
	I0725 18:50:31.450530   59378 main.go:141] libmachine: (no-preload-371663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:31.450547   59378 main.go:141] libmachine: (no-preload-371663) DBG | About to run SSH command:
	I0725 18:50:31.450553   59378 main.go:141] libmachine: (no-preload-371663) DBG | exit 0
	I0725 18:50:31.576105   59378 main.go:141] libmachine: (no-preload-371663) DBG | SSH cmd err, output: <nil>: 
	I0725 18:50:31.576631   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetConfigRaw
	I0725 18:50:31.577245   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.579460   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.579968   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.580003   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.580381   59378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/config.json ...
	I0725 18:50:31.580703   59378 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:31.580728   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:31.580956   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.583261   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583564   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.583592   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583717   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.583910   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584085   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584246   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.584476   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.584689   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.584701   59378 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:31.696230   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:31.696261   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696509   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:50:31.696536   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696714   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.699042   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699322   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.699359   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699484   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.699701   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699968   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.700164   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.700480   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.700503   59378 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-371663 && echo "no-preload-371663" | sudo tee /etc/hostname
	I0725 18:50:31.826044   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-371663
	
	I0725 18:50:31.826069   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.828951   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829261   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.829313   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829483   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.829695   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.829878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.830065   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.830274   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.830449   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.830466   59378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-371663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-371663/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-371663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:31.948518   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:50:31.948561   59378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:31.948739   59378 buildroot.go:174] setting up certificates
	I0725 18:50:31.948753   59378 provision.go:84] configureAuth start
	I0725 18:50:31.948771   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.949045   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.951790   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952169   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.952194   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952363   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.954317   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954610   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.954633   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954770   59378 provision.go:143] copyHostCerts
	I0725 18:50:31.954835   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:31.954848   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:31.954901   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:31.954987   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:31.954997   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:31.955021   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:31.955074   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:31.955081   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:31.955097   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:31.955149   59378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.no-preload-371663 san=[127.0.0.1 192.168.72.62 localhost minikube no-preload-371663]
	I0725 18:50:32.038369   59378 provision.go:177] copyRemoteCerts
	I0725 18:50:32.038427   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:32.038448   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.041392   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041787   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.041823   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041961   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.042148   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.042322   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.042454   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.130425   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:32.153447   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:32.179831   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:50:32.202512   59378 provision.go:87] duration metric: took 253.73326ms to configureAuth
	I0725 18:50:32.202539   59378 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:32.202722   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:32.202787   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.205038   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205415   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.205445   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205666   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.205853   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206022   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206162   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.206347   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.206543   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.206569   59378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:32.483108   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:32.483135   59378 machine.go:97] duration metric: took 902.412636ms to provisionDockerMachine
	I0725 18:50:32.483147   59378 start.go:293] postStartSetup for "no-preload-371663" (driver="kvm2")
	I0725 18:50:32.483162   59378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:32.483182   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.483495   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:32.483525   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.486096   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486457   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.486484   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486662   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.486856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.487002   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.487133   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.575210   59378 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:32.579169   59378 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:32.579196   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:32.579278   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:32.579383   59378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:32.579558   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:32.588619   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:32.611429   59378 start.go:296] duration metric: took 128.267646ms for postStartSetup
	I0725 18:50:32.611471   59378 fix.go:56] duration metric: took 18.430282963s for fixHost
	I0725 18:50:32.611493   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.614328   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614667   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.614696   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.615100   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615260   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615408   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.615587   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.615848   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.615863   59378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:50:32.724784   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933432.694745980
	
	I0725 18:50:32.724810   59378 fix.go:216] guest clock: 1721933432.694745980
	I0725 18:50:32.724822   59378 fix.go:229] Guest: 2024-07-25 18:50:32.69474598 +0000 UTC Remote: 2024-07-25 18:50:32.611474903 +0000 UTC m=+371.708292453 (delta=83.271077ms)
	I0725 18:50:32.724850   59378 fix.go:200] guest clock delta is within tolerance: 83.271077ms
	I0725 18:50:32.724864   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 18.543706361s
	I0725 18:50:32.724891   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.725152   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:32.727958   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728294   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.728340   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728478   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.728957   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729091   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729192   59378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:32.729243   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.729319   59378 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:32.729347   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.731757   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732040   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732063   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732081   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732196   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732384   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.732538   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732557   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732562   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.732734   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732734   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.732890   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.733041   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.733164   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.845665   59378 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:32.851484   59378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:32.994671   59378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:33.000655   59378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:33.000718   59378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:33.016541   59378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:50:33.016570   59378 start.go:495] detecting cgroup driver to use...
	I0725 18:50:33.016634   59378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:33.032473   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:33.046063   59378 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:33.046126   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:33.059249   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:33.072607   59378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:33.204647   59378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:33.353644   59378 docker.go:233] disabling docker service ...
	I0725 18:50:33.353719   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:33.368162   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:33.380709   59378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:33.521954   59378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:33.656011   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:33.668858   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:33.685751   59378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0725 18:50:33.685826   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.695022   59378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:33.695106   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.704447   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.713600   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.722782   59378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:33.733635   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.744226   59378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.761049   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.771689   59378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:33.781648   59378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:33.781695   59378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:33.794549   59378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:50:33.803765   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:33.915398   59378 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:50:34.054477   59378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:34.054535   59378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:34.058998   59378 start.go:563] Will wait 60s for crictl version
	I0725 18:50:34.059058   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.062552   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:34.105552   59378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:34.105616   59378 ssh_runner.go:195] Run: crio --version
	I0725 18:50:34.134591   59378 ssh_runner.go:195] Run: crio --version
	I0725 18:50:34.166581   59378 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0725 18:50:34.167725   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:34.170389   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.170838   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:34.170869   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.171014   59378 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:34.174860   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:34.186830   59378 kubeadm.go:883] updating cluster {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:34.186934   59378 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0725 18:50:34.186964   59378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:34.221834   59378 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0725 18:50:34.221863   59378 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:34.221911   59378 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.221962   59378 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.221975   59378 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.221994   59378 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0725 18:50:34.222013   59378 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.221933   59378 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.222080   59378 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.222307   59378 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223376   59378 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.223405   59378 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0725 18:50:34.223394   59378 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.223416   59378 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223385   59378 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.223445   59378 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.223639   59378 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.223759   59378 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.460560   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0725 18:50:34.464591   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.478896   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.494335   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.507397   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.519589   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.524374   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.639570   59378 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0725 18:50:34.639620   59378 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.639628   59378 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0725 18:50:34.639664   59378 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.639678   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639701   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639728   59378 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0725 18:50:34.639749   59378 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.639756   59378 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0725 18:50:34.639710   59378 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0725 18:50:34.639789   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639791   59378 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.639793   59378 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.639815   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639822   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660351   59378 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0725 18:50:34.660401   59378 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.660418   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.660438   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.660446   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660488   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.660530   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.660621   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.748020   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0725 18:50:34.748120   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748133   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.748181   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.748204   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748254   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.761895   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.761960   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0725 18:50:34.762002   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.762056   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:34.762069   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.766440   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0725 18:50:34.766458   59378 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766478   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0725 18:50:34.766493   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766612   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0725 18:50:34.776491   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0725 18:50:34.806227   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0725 18:50:34.806283   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:34.806386   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:35.506093   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:32.098641   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:34.099078   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:31.540443   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.039950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.539852   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.039523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.539582   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.040355   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.539951   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.040161   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.540076   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:36.040195   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.787650   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:37.788363   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:36.755933   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.989415896s)
	I0725 18:50:36.755967   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0725 18:50:36.755980   59378 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.249846616s)
	I0725 18:50:36.756026   59378 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0725 18:50:36.755988   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.756064   59378 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.756113   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:36.756116   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.755938   59378 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.949524568s)
	I0725 18:50:36.756281   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0725 18:50:38.622350   59378 ssh_runner.go:235] Completed: which crictl: (1.866164977s)
	I0725 18:50:38.622426   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.866163984s)
	I0725 18:50:38.622504   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0725 18:50:38.622540   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622604   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622432   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.599286   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:37.098495   60732 node_ready.go:49] node "embed-certs-646344" has status "Ready":"True"
	I0725 18:50:37.098517   60732 node_ready.go:38] duration metric: took 7.003335292s for node "embed-certs-646344" to be "Ready" ...
	I0725 18:50:37.098526   60732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:37.104721   60732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109765   60732 pod_ready.go:92] pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.109788   60732 pod_ready.go:81] duration metric: took 5.033244ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109798   60732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113639   60732 pod_ready.go:92] pod "etcd-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.113661   60732 pod_ready.go:81] duration metric: took 3.854986ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113672   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.120875   60732 pod_ready.go:102] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:39.620552   60732 pod_ready.go:92] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:39.620573   60732 pod_ready.go:81] duration metric: took 2.506893984s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.620583   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628931   60732 pod_ready.go:92] pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.628959   60732 pod_ready.go:81] duration metric: took 1.008369558s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628973   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634812   60732 pod_ready.go:92] pod "kube-proxy-xk2lq" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.634840   60732 pod_ready.go:81] duration metric: took 5.858603ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634853   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:36.540043   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.039832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.540456   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.039553   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.539530   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.040246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.539520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.039506   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.539963   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:41.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.290126   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:42.787353   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.108821   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.486186911s)
	I0725 18:50:41.108854   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0725 18:50:41.108878   59378 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108884   59378 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.486217866s)
	I0725 18:50:41.108919   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108925   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 18:50:41.109010   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366140   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.257196486s)
	I0725 18:50:44.366170   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0725 18:50:44.366175   59378 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.257147663s)
	I0725 18:50:44.366192   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0725 18:50:44.366206   59378 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366252   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:45.013042   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0725 18:50:45.013079   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:45.013131   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:41.641738   60732 pod_ready.go:92] pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:41.641758   60732 pod_ready.go:81] duration metric: took 1.006897558s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:41.641768   60732 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:43.648859   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.147477   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.539822   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.039895   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.539947   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.040433   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.540098   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.040089   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.540140   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.040238   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.539529   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:46.040232   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.287326   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:47.288029   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.372000   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358829497s)
	I0725 18:50:46.372038   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0725 18:50:46.372056   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:46.372117   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:48.326922   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954778301s)
	I0725 18:50:48.326952   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0725 18:50:48.326981   59378 cache_images.go:123] Successfully loaded all cached images
	I0725 18:50:48.326987   59378 cache_images.go:92] duration metric: took 14.105111756s to LoadCachedImages
	I0725 18:50:48.326998   59378 kubeadm.go:934] updating node { 192.168.72.62 8443 v1.31.0-beta.0 crio true true} ...
	I0725 18:50:48.327229   59378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-371663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:48.327311   59378 ssh_runner.go:195] Run: crio config
	I0725 18:50:48.380082   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:48.380104   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:48.380116   59378 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:48.380141   59378 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.62 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-371663 NodeName:no-preload-371663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:48.380276   59378 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-371663"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:48.380365   59378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0725 18:50:48.390309   59378 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:48.390375   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:48.399357   59378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0725 18:50:48.426673   59378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0725 18:50:48.443648   59378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0725 18:50:48.460908   59378 ssh_runner.go:195] Run: grep 192.168.72.62	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:48.464505   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:48.475937   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:48.598976   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:48.614468   59378 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663 for IP: 192.168.72.62
	I0725 18:50:48.614495   59378 certs.go:194] generating shared ca certs ...
	I0725 18:50:48.614511   59378 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:48.614683   59378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:48.614722   59378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:48.614732   59378 certs.go:256] generating profile certs ...
	I0725 18:50:48.614802   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.key
	I0725 18:50:48.614860   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key.1b99cd2e
	I0725 18:50:48.614894   59378 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key
	I0725 18:50:48.615018   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:48.615047   59378 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:48.615055   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:48.615091   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:48.615150   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:48.615204   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:48.615259   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:48.615987   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:48.647327   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:48.689347   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:48.718281   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:48.749086   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0725 18:50:48.775795   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:48.804894   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:48.827724   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:50:48.850476   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:48.873193   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:48.897778   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:48.922891   59378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:48.940439   59378 ssh_runner.go:195] Run: openssl version
	I0725 18:50:48.945916   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:48.956285   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960454   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960503   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.965881   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:48.975282   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:48.984697   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988899   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988958   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.993992   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:49.003677   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:49.013434   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017584   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017633   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.022926   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:49.033066   59378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:49.037719   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:49.043668   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:49.049308   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:49.055105   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:49.060763   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:49.066635   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 18:50:49.072235   59378 kubeadm.go:392] StartCluster: {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:49.072358   59378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:49.072426   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.107696   59378 cri.go:89] found id: ""
	I0725 18:50:49.107780   59378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:49.118074   59378 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:49.118098   59378 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:49.118144   59378 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:49.127465   59378 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:49.128541   59378 kubeconfig.go:125] found "no-preload-371663" server: "https://192.168.72.62:8443"
	I0725 18:50:49.130601   59378 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:49.140027   59378 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.62
	I0725 18:50:49.140074   59378 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:49.140087   59378 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:49.140148   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.188682   59378 cri.go:89] found id: ""
	I0725 18:50:49.188743   59378 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:49.205634   59378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:49.214829   59378 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:49.214858   59378 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:49.214912   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:49.223758   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:49.223825   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:49.233245   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:49.241613   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:49.241669   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:49.249965   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.258343   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:49.258404   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.267058   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:49.275241   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:49.275297   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:49.284219   59378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:49.293754   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:49.398525   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.308879   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.505415   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.573519   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.655766   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:50.655857   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.148464   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:50.649767   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.539657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.039681   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.540207   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.040234   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.539937   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.039544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.539646   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.039759   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.540439   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.040293   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.786573   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.786918   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:53.790293   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.156896   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.656267   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.675997   59378 api_server.go:72] duration metric: took 1.02022659s to wait for apiserver process to appear ...
	I0725 18:50:51.676029   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:51.676060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:51.676567   59378 api_server.go:269] stopped: https://192.168.72.62:8443/healthz: Get "https://192.168.72.62:8443/healthz": dial tcp 192.168.72.62:8443: connect: connection refused
	I0725 18:50:52.176176   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.302009   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.302043   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.302060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.313888   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.313913   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.676316   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.680686   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:54.680712   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.176378   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.181169   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:55.181195   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.676817   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.681072   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:50:55.689674   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:50:55.689697   59378 api_server.go:131] duration metric: took 4.013661633s to wait for apiserver health ...
	I0725 18:50:55.689705   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:55.689711   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:55.691626   59378 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:50:55.692856   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:55.705154   59378 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:50:55.722942   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:55.735231   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:55.735270   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:55.735281   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:55.735294   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:55.735303   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:55.735316   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:55.735325   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:55.735338   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:55.735346   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:55.735357   59378 system_pods.go:74] duration metric: took 12.387054ms to wait for pod list to return data ...
	I0725 18:50:55.735370   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:55.738963   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:55.738984   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:55.738998   59378 node_conditions.go:105] duration metric: took 3.619707ms to run NodePressure ...
	I0725 18:50:55.739017   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:53.151773   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:55.647633   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.540537   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.040242   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.539493   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.039657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.540427   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.039461   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.539605   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.040573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.038936   59378 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043772   59378 kubeadm.go:739] kubelet initialised
	I0725 18:50:56.043793   59378 kubeadm.go:740] duration metric: took 4.834181ms waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043801   59378 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:56.050252   59378 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.055796   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055819   59378 pod_ready.go:81] duration metric: took 5.539256ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.055827   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055845   59378 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.059725   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059745   59378 pod_ready.go:81] duration metric: took 3.890205ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.059755   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059762   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.063388   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063409   59378 pod_ready.go:81] duration metric: took 3.63968ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.063419   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063427   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.126502   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126531   59378 pod_ready.go:81] duration metric: took 63.090083ms for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.126544   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126554   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.526433   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526465   59378 pod_ready.go:81] duration metric: took 399.900344ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.526477   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526485   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.926658   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926686   59378 pod_ready.go:81] duration metric: took 400.192009ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.926696   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926702   59378 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:57.326373   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326398   59378 pod_ready.go:81] duration metric: took 399.68759ms for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:57.326408   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326415   59378 pod_ready.go:38] duration metric: took 1.282607524s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:57.326433   59378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:57.338819   59378 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:57.338836   59378 kubeadm.go:597] duration metric: took 8.220732382s to restartPrimaryControlPlane
	I0725 18:50:57.338845   59378 kubeadm.go:394] duration metric: took 8.26661565s to StartCluster
	I0725 18:50:57.338862   59378 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.338938   59378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:57.341213   59378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.341506   59378 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:57.341574   59378 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:57.341660   59378 addons.go:69] Setting storage-provisioner=true in profile "no-preload-371663"
	I0725 18:50:57.341684   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:57.341696   59378 addons.go:234] Setting addon storage-provisioner=true in "no-preload-371663"
	I0725 18:50:57.341691   59378 addons.go:69] Setting default-storageclass=true in profile "no-preload-371663"
	W0725 18:50:57.341705   59378 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:57.341719   59378 addons.go:69] Setting metrics-server=true in profile "no-preload-371663"
	I0725 18:50:57.341737   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.341776   59378 addons.go:234] Setting addon metrics-server=true in "no-preload-371663"
	W0725 18:50:57.341790   59378 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:57.341727   59378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-371663"
	I0725 18:50:57.341827   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.342109   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342146   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342157   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342185   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342205   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342238   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.343259   59378 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:57.344618   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:57.359231   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0725 18:50:57.359295   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41709
	I0725 18:50:57.359759   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360261   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360528   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360554   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.360885   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.360970   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360989   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.361279   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.361299   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.361452   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.361551   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0725 18:50:57.361947   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.361954   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.362450   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.362468   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.362901   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.363495   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.363514   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.365316   59378 addons.go:234] Setting addon default-storageclass=true in "no-preload-371663"
	W0725 18:50:57.365329   59378 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:57.365349   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.365748   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.365785   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.377970   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0725 18:50:57.379022   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.379523   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.379543   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.379963   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.380124   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.382257   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0725 18:50:57.382648   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.382989   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
	I0725 18:50:57.383098   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383110   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.383292   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.383365   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.383456   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.383764   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.383854   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383876   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.384308   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.384905   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.384948   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.385117   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.385388   59378 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:57.386699   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:57.386716   59378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:57.386716   59378 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:57.386784   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.388097   59378 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.388127   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:57.388142   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.389322   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389752   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.389782   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389902   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.390094   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.390251   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.390402   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.391324   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391699   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.391723   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391870   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.392024   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.392156   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.392289   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.429920   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0725 18:50:57.430364   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.430865   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.430883   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.431250   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.431459   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.433381   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.433618   59378 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.433636   59378 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:57.433655   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.436318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437075   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.437100   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.437139   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437253   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.437431   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.437629   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.533461   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:57.551609   59378 node_ready.go:35] waiting up to 6m0s for node "no-preload-371663" to be "Ready" ...
	I0725 18:50:57.663269   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:57.663295   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:57.676948   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.698961   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.699589   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:57.699608   59378 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:57.732899   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:57.732928   59378 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:57.783734   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:58.930567   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.231552088s)
	I0725 18:50:58.930632   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930653   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930686   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.146908463s)
	I0725 18:50:58.930684   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.253701775s)
	I0725 18:50:58.930724   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930737   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930751   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930739   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931112   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931129   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931137   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931143   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931143   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931150   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931159   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931167   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931171   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931178   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931237   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931349   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931363   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931373   59378 addons.go:475] Verifying addon metrics-server=true in "no-preload-371663"
	I0725 18:50:58.931520   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931559   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931576   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932215   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932238   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932267   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.932277   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.932506   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.932541   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932556   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940231   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.940252   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.940516   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.940535   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940519   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.942747   59378 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0725 18:50:56.286642   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.787357   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.943983   59378 addons.go:510] duration metric: took 1.602421244s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0725 18:50:59.554933   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.648530   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:00.147626   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:56.539704   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.039573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.539523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.040168   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.540038   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.040304   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.540248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.039609   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.540022   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.039843   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.285836   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:03.287743   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.555887   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:04.056538   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:05.055354   59378 node_ready.go:49] node "no-preload-371663" has status "Ready":"True"
	I0725 18:51:05.055378   59378 node_ready.go:38] duration metric: took 7.50373959s for node "no-preload-371663" to be "Ready" ...
	I0725 18:51:05.055389   59378 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:51:05.061464   59378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066947   59378 pod_ready.go:92] pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.066967   59378 pod_ready.go:81] duration metric: took 5.477209ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066978   59378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071413   59378 pod_ready.go:92] pod "etcd-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.071431   59378 pod_ready.go:81] duration metric: took 4.445948ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071441   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076020   59378 pod_ready.go:92] pod "kube-apiserver-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.076042   59378 pod_ready.go:81] duration metric: took 4.593495ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076053   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:02.648362   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:04.648959   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.539808   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.039515   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.540034   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.040266   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.539829   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.039496   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.540260   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.040236   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.540450   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:06.039595   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:06.039675   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:06.077020   60176 cri.go:89] found id: ""
	I0725 18:51:06.077048   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.077058   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:06.077066   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:06.077125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:06.109173   60176 cri.go:89] found id: ""
	I0725 18:51:06.109203   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.109213   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:06.109220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:06.109283   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:06.141838   60176 cri.go:89] found id: ""
	I0725 18:51:06.141875   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.141882   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:06.141888   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:06.141947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:06.175036   60176 cri.go:89] found id: ""
	I0725 18:51:06.175063   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.175074   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:06.175081   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:06.175144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:06.207497   60176 cri.go:89] found id: ""
	I0725 18:51:06.207519   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.207527   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:06.207532   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:06.207589   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:06.241910   60176 cri.go:89] found id: ""
	I0725 18:51:06.241936   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.241943   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:06.241948   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:06.242001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:06.273353   60176 cri.go:89] found id: ""
	I0725 18:51:06.273381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.273391   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:06.273398   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:06.273472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:06.307358   60176 cri.go:89] found id: ""
	I0725 18:51:06.307381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.307391   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:06.307401   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:06.307415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:06.360759   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:06.360792   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:06.373930   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:06.373956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:51:05.787345   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:08.287436   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:07.081865   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.082937   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:10.583975   59378 pod_ready.go:92] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.584001   59378 pod_ready.go:81] duration metric: took 5.507938695s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.584015   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588959   59378 pod_ready.go:92] pod "kube-proxy-bf9rt" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.588978   59378 pod_ready.go:81] duration metric: took 4.956126ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588986   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593238   59378 pod_ready.go:92] pod "kube-scheduler-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.593255   59378 pod_ready.go:81] duration metric: took 4.263169ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593263   59378 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:07.147874   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.649266   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:51:06.488979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:06.489003   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:06.489018   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:06.553782   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:06.553813   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.093966   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:09.106176   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:09.106242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:09.143847   60176 cri.go:89] found id: ""
	I0725 18:51:09.143872   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.143880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:09.143885   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:09.143936   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:09.178605   60176 cri.go:89] found id: ""
	I0725 18:51:09.178636   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.178647   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:09.178654   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:09.178715   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:09.211866   60176 cri.go:89] found id: ""
	I0725 18:51:09.211892   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.211901   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:09.211906   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:09.211957   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:09.244343   60176 cri.go:89] found id: ""
	I0725 18:51:09.244371   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.244381   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:09.244389   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:09.244445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:09.279416   60176 cri.go:89] found id: ""
	I0725 18:51:09.279440   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.279448   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:09.279463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:09.279530   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:09.317039   60176 cri.go:89] found id: ""
	I0725 18:51:09.317064   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.317071   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:09.317077   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:09.317123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:09.347997   60176 cri.go:89] found id: ""
	I0725 18:51:09.348031   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.348042   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:09.348049   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:09.348107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:09.380485   60176 cri.go:89] found id: ""
	I0725 18:51:09.380514   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.380524   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:09.380535   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:09.380560   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:09.451881   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:09.451920   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.488427   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:09.488454   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:09.538096   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:09.538142   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:09.551001   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:09.551026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:09.628882   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:10.287604   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.787008   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.600101   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:15.102797   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.149625   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:14.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.129787   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:12.141852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:12.141915   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:12.178227   60176 cri.go:89] found id: ""
	I0725 18:51:12.178257   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.178266   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:12.178271   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:12.178329   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:12.209154   60176 cri.go:89] found id: ""
	I0725 18:51:12.209179   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.209186   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:12.209190   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:12.209238   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:12.244091   60176 cri.go:89] found id: ""
	I0725 18:51:12.244119   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.244127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:12.244134   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:12.244183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:12.277865   60176 cri.go:89] found id: ""
	I0725 18:51:12.277894   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.277906   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:12.277911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:12.277958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:12.311172   60176 cri.go:89] found id: ""
	I0725 18:51:12.311196   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.311207   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:12.311214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:12.311274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:12.341668   60176 cri.go:89] found id: ""
	I0725 18:51:12.341696   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.341706   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:12.341714   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:12.341775   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:12.375342   60176 cri.go:89] found id: ""
	I0725 18:51:12.375372   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.375383   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:12.375390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:12.375449   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:12.409783   60176 cri.go:89] found id: ""
	I0725 18:51:12.409807   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.409814   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:12.409822   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:12.409834   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:12.484503   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:12.484546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:12.522948   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:12.522974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:12.573975   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:12.574008   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:12.587600   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:12.587628   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:12.660403   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.161385   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:15.174773   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:15.174845   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:15.206845   60176 cri.go:89] found id: ""
	I0725 18:51:15.206871   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.206882   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:15.206889   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:15.206949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:15.239306   60176 cri.go:89] found id: ""
	I0725 18:51:15.239335   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.239344   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:15.239350   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:15.239437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:15.276152   60176 cri.go:89] found id: ""
	I0725 18:51:15.276187   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.276198   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:15.276207   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:15.276265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:15.309616   60176 cri.go:89] found id: ""
	I0725 18:51:15.309647   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.309659   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:15.309667   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:15.309729   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:15.343938   60176 cri.go:89] found id: ""
	I0725 18:51:15.343967   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.343978   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:15.343985   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:15.344051   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:15.380268   60176 cri.go:89] found id: ""
	I0725 18:51:15.380298   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.380310   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:15.380317   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:15.380448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:15.421291   60176 cri.go:89] found id: ""
	I0725 18:51:15.421337   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.421347   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:15.421353   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:15.421408   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:15.466805   60176 cri.go:89] found id: ""
	I0725 18:51:15.466826   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.466835   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:15.466845   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:15.466859   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:15.513464   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:15.513489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:15.567742   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:15.567775   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:15.583613   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:15.583647   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:15.653613   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.653637   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:15.653651   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:15.287256   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.786753   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.599678   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.600015   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.147792   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.148724   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:18.230294   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:18.244269   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:18.244352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:18.282255   60176 cri.go:89] found id: ""
	I0725 18:51:18.282281   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.282291   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:18.282298   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:18.282377   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:18.316217   60176 cri.go:89] found id: ""
	I0725 18:51:18.316250   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.316261   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:18.316269   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:18.316349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:18.347730   60176 cri.go:89] found id: ""
	I0725 18:51:18.347756   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.347764   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:18.347769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:18.347815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:18.379968   60176 cri.go:89] found id: ""
	I0725 18:51:18.379991   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.379999   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:18.380006   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:18.380062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:18.415621   60176 cri.go:89] found id: ""
	I0725 18:51:18.415644   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.415652   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:18.415657   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:18.415704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:18.452073   60176 cri.go:89] found id: ""
	I0725 18:51:18.452101   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.452109   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:18.452115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:18.452171   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:18.483337   60176 cri.go:89] found id: ""
	I0725 18:51:18.483382   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.483390   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:18.483396   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:18.483440   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:18.516941   60176 cri.go:89] found id: ""
	I0725 18:51:18.516966   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.516976   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:18.516987   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:18.517002   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:18.587295   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:18.587321   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:18.587338   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:18.666539   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:18.666569   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:18.707434   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:18.707465   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:18.761893   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:18.761932   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.276464   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:21.291939   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:21.292011   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:21.326022   60176 cri.go:89] found id: ""
	I0725 18:51:21.326055   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.326066   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:21.326073   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:21.326130   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:21.366081   60176 cri.go:89] found id: ""
	I0725 18:51:21.366104   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.366112   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:21.366117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:21.366165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:20.287325   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.287799   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.101134   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:24.600119   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.647763   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:23.648088   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:25.649170   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.403086   60176 cri.go:89] found id: ""
	I0725 18:51:21.403111   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.403122   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:21.403128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:21.403208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:21.439268   60176 cri.go:89] found id: ""
	I0725 18:51:21.439297   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.439305   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:21.439310   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:21.439359   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:21.483601   60176 cri.go:89] found id: ""
	I0725 18:51:21.483631   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.483639   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:21.483645   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:21.483704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:21.519061   60176 cri.go:89] found id: ""
	I0725 18:51:21.519093   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.519103   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:21.519111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:21.519186   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:21.548781   60176 cri.go:89] found id: ""
	I0725 18:51:21.548806   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.548814   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:21.548820   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:21.548881   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:21.581940   60176 cri.go:89] found id: ""
	I0725 18:51:21.581963   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.581970   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:21.581979   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:21.581991   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:21.634758   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:21.634795   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.648358   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:21.648382   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:21.716109   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:21.716133   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:21.716149   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:21.794003   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:21.794030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.331731   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:24.344646   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:24.344709   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:24.385373   60176 cri.go:89] found id: ""
	I0725 18:51:24.385395   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.385403   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:24.385408   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:24.385453   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:24.417015   60176 cri.go:89] found id: ""
	I0725 18:51:24.417044   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.417054   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:24.417061   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:24.417125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:24.457093   60176 cri.go:89] found id: ""
	I0725 18:51:24.457118   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.457127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:24.457132   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:24.457197   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:24.489155   60176 cri.go:89] found id: ""
	I0725 18:51:24.489183   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.489192   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:24.489197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:24.489253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:24.521907   60176 cri.go:89] found id: ""
	I0725 18:51:24.521934   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.521943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:24.521949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:24.522006   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:24.553652   60176 cri.go:89] found id: ""
	I0725 18:51:24.553688   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.553698   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:24.553705   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:24.553765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:24.587957   60176 cri.go:89] found id: ""
	I0725 18:51:24.587989   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.587997   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:24.588002   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:24.588060   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:24.623564   60176 cri.go:89] found id: ""
	I0725 18:51:24.623591   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.623600   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:24.623609   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:24.623624   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:24.676176   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:24.676208   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:24.689179   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:24.689202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:24.761900   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:24.761928   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:24.761943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:24.845021   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:24.845058   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.287960   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:26.288704   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.788851   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.099186   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:29.100563   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.147374   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:30.148158   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.384900   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:27.398947   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:27.399009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:27.431604   60176 cri.go:89] found id: ""
	I0725 18:51:27.431632   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.431641   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:27.431648   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:27.431698   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:27.464167   60176 cri.go:89] found id: ""
	I0725 18:51:27.464201   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.464212   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:27.464220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:27.464279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:27.497951   60176 cri.go:89] found id: ""
	I0725 18:51:27.497985   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.497996   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:27.498003   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:27.498056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:27.535363   60176 cri.go:89] found id: ""
	I0725 18:51:27.535389   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.535401   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:27.535406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:27.535452   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:27.565506   60176 cri.go:89] found id: ""
	I0725 18:51:27.565531   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.565541   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:27.565548   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:27.565615   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:27.595635   60176 cri.go:89] found id: ""
	I0725 18:51:27.595662   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.595672   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:27.595678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:27.595734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:27.627482   60176 cri.go:89] found id: ""
	I0725 18:51:27.627511   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.627522   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:27.627529   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:27.627596   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:27.663481   60176 cri.go:89] found id: ""
	I0725 18:51:27.663507   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.663517   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:27.663530   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:27.663544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:27.746487   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:27.746519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:27.783100   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:27.783128   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:27.834865   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:27.834895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:27.849097   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:27.849124   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:27.914406   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:30.415417   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:30.429086   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:30.429151   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:30.470514   60176 cri.go:89] found id: ""
	I0725 18:51:30.470538   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.470561   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:30.470569   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:30.470629   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:30.503903   60176 cri.go:89] found id: ""
	I0725 18:51:30.503931   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.503942   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:30.503950   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:30.504014   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:30.535562   60176 cri.go:89] found id: ""
	I0725 18:51:30.535589   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.535597   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:30.535602   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:30.535667   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:30.567435   60176 cri.go:89] found id: ""
	I0725 18:51:30.567461   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.567471   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:30.567478   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:30.567538   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:30.604430   60176 cri.go:89] found id: ""
	I0725 18:51:30.604455   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.604465   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:30.604471   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:30.604540   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:30.644788   60176 cri.go:89] found id: ""
	I0725 18:51:30.644814   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.644834   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:30.644843   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:30.644908   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:30.678530   60176 cri.go:89] found id: ""
	I0725 18:51:30.678572   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.678585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:30.678593   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:30.678668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:30.713090   60176 cri.go:89] found id: ""
	I0725 18:51:30.713112   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.713120   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:30.713128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:30.713141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:30.792075   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:30.792106   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:30.829452   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:30.829482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:30.879437   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:30.879474   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:30.892281   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:30.892308   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:30.959814   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:31.286895   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.786731   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:31.599727   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.600800   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:35.601282   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:32.647508   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:34.648594   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.460838   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:33.474242   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:33.474351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:33.508097   60176 cri.go:89] found id: ""
	I0725 18:51:33.508125   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.508134   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:33.508140   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:33.508188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:33.542576   60176 cri.go:89] found id: ""
	I0725 18:51:33.542605   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.542612   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:33.542618   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:33.542666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:33.576079   60176 cri.go:89] found id: ""
	I0725 18:51:33.576106   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.576115   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:33.576122   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:33.576187   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:33.610618   60176 cri.go:89] found id: ""
	I0725 18:51:33.610639   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.610646   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:33.610651   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:33.610702   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:33.641925   60176 cri.go:89] found id: ""
	I0725 18:51:33.641960   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.641972   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:33.641979   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:33.642047   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:33.675283   60176 cri.go:89] found id: ""
	I0725 18:51:33.675318   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.675333   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:33.675346   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:33.675412   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:33.707991   60176 cri.go:89] found id: ""
	I0725 18:51:33.708017   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.708026   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:33.708034   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:33.708094   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:33.744209   60176 cri.go:89] found id: ""
	I0725 18:51:33.744237   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.744247   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:33.744258   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:33.744273   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:33.794620   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:33.794648   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:33.807089   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:33.807118   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:33.870937   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:33.870960   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:33.870976   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:33.953214   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:33.953249   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:36.287050   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.788127   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.100230   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:40.600037   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:37.147276   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:39.147994   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:36.491625   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:36.504949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:36.505023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:36.538077   60176 cri.go:89] found id: ""
	I0725 18:51:36.538101   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.538109   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:36.538114   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:36.538165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:36.570239   60176 cri.go:89] found id: ""
	I0725 18:51:36.570262   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.570269   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:36.570275   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:36.570325   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:36.603096   60176 cri.go:89] found id: ""
	I0725 18:51:36.603124   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.603133   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:36.603139   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:36.603196   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:36.637479   60176 cri.go:89] found id: ""
	I0725 18:51:36.637506   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.637518   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:36.637525   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:36.637580   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:36.670834   60176 cri.go:89] found id: ""
	I0725 18:51:36.670859   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.670868   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:36.670875   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:36.670942   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:36.707825   60176 cri.go:89] found id: ""
	I0725 18:51:36.707851   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.707859   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:36.707866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:36.707924   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:36.748014   60176 cri.go:89] found id: ""
	I0725 18:51:36.748046   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.748058   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:36.748067   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:36.748132   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:36.779939   60176 cri.go:89] found id: ""
	I0725 18:51:36.779967   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.779975   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:36.779982   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:36.779994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:36.836710   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:36.836741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:36.849791   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:36.849830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:36.919247   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:36.919270   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:36.919286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:36.994368   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:36.994405   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:39.530980   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:39.543355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:39.543417   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:39.576897   60176 cri.go:89] found id: ""
	I0725 18:51:39.576925   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.576935   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:39.576941   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:39.576996   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:39.610545   60176 cri.go:89] found id: ""
	I0725 18:51:39.610576   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.610584   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:39.610596   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:39.610651   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:39.642072   60176 cri.go:89] found id: ""
	I0725 18:51:39.642097   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.642107   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:39.642114   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:39.642173   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:39.673841   60176 cri.go:89] found id: ""
	I0725 18:51:39.673866   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.673874   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:39.673880   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:39.673933   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:39.706537   60176 cri.go:89] found id: ""
	I0725 18:51:39.706562   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.706571   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:39.706584   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:39.706635   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:39.744897   60176 cri.go:89] found id: ""
	I0725 18:51:39.744924   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.744935   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:39.744942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:39.745004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:39.780466   60176 cri.go:89] found id: ""
	I0725 18:51:39.780493   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.780503   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:39.780510   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:39.780581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:39.813672   60176 cri.go:89] found id: ""
	I0725 18:51:39.813694   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.813701   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:39.813709   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:39.813721   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:39.862459   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:39.862489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:39.875276   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:39.875304   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:39.941693   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:39.941715   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:39.941729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:40.017010   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:40.017055   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:41.286377   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.289761   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.600311   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.098813   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:41.647858   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.647939   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.559158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:42.571866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:42.571945   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:42.605268   60176 cri.go:89] found id: ""
	I0725 18:51:42.605317   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.605326   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:42.605332   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:42.605392   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:42.641719   60176 cri.go:89] found id: ""
	I0725 18:51:42.641753   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.641764   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:42.641774   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:42.641837   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:42.675667   60176 cri.go:89] found id: ""
	I0725 18:51:42.675695   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.675703   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:42.675711   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:42.675773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:42.709895   60176 cri.go:89] found id: ""
	I0725 18:51:42.709923   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.709933   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:42.709940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:42.710002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:42.742278   60176 cri.go:89] found id: ""
	I0725 18:51:42.742308   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.742318   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:42.742325   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:42.742395   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:42.773623   60176 cri.go:89] found id: ""
	I0725 18:51:42.773651   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.773661   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:42.773668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:42.773727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:42.810538   60176 cri.go:89] found id: ""
	I0725 18:51:42.810566   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.810576   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:42.810583   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:42.810657   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:42.850508   60176 cri.go:89] found id: ""
	I0725 18:51:42.850530   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.850537   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:42.850545   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:42.850556   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:42.901350   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:42.901389   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:42.914573   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:42.914600   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:42.978823   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:42.978852   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:42.978866   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:43.057323   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:43.057357   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:45.593677   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:45.607689   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:45.607801   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:45.640969   60176 cri.go:89] found id: ""
	I0725 18:51:45.640997   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.641007   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:45.641014   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:45.641075   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:45.672268   60176 cri.go:89] found id: ""
	I0725 18:51:45.672293   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.672300   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:45.672310   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:45.672396   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:45.705582   60176 cri.go:89] found id: ""
	I0725 18:51:45.705610   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.705618   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:45.705625   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:45.705686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:45.747705   60176 cri.go:89] found id: ""
	I0725 18:51:45.747737   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.747759   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:45.747766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:45.747815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:45.787258   60176 cri.go:89] found id: ""
	I0725 18:51:45.787284   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.787294   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:45.787302   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:45.787366   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:45.820971   60176 cri.go:89] found id: ""
	I0725 18:51:45.820992   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.821008   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:45.821019   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:45.821068   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:45.853828   60176 cri.go:89] found id: ""
	I0725 18:51:45.853858   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.853869   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:45.853876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:45.853935   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:45.886645   60176 cri.go:89] found id: ""
	I0725 18:51:45.886672   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.886682   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:45.886692   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:45.886708   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:45.953195   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:45.953223   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:45.953239   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:46.027894   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:46.027929   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:46.067935   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:46.067960   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:46.120467   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:46.120500   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:45.788103   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.287846   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:47.100357   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:49.100578   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.148035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:50.148589   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.634095   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:48.647390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:48.647464   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:48.683149   60176 cri.go:89] found id: ""
	I0725 18:51:48.683171   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.683178   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:48.683203   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:48.683252   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:48.720502   60176 cri.go:89] found id: ""
	I0725 18:51:48.720529   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.720539   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:48.720546   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:48.720593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:48.752927   60176 cri.go:89] found id: ""
	I0725 18:51:48.752954   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.752962   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:48.752968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:48.753025   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:48.788434   60176 cri.go:89] found id: ""
	I0725 18:51:48.788460   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.788468   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:48.788474   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:48.788520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:48.825157   60176 cri.go:89] found id: ""
	I0725 18:51:48.825184   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.825194   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:48.825199   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:48.825248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:48.859948   60176 cri.go:89] found id: ""
	I0725 18:51:48.859973   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.859981   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:48.859986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:48.860046   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:48.894788   60176 cri.go:89] found id: ""
	I0725 18:51:48.894811   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.894819   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:48.894824   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:48.894878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:48.929619   60176 cri.go:89] found id: ""
	I0725 18:51:48.929645   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.929653   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:48.929662   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:48.929675   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:49.001842   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:49.001865   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:49.001888   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:49.086265   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:49.086299   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:49.127674   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:49.127704   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:49.181388   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:49.181424   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:50.787213   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:53.287266   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.601462   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.099078   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:52.647863   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.648789   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.695119   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:51.707568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:51.707630   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:51.742936   60176 cri.go:89] found id: ""
	I0725 18:51:51.742963   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.742973   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:51.742980   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:51.743033   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:51.776584   60176 cri.go:89] found id: ""
	I0725 18:51:51.776610   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.776618   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:51.776623   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:51.776691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:51.809763   60176 cri.go:89] found id: ""
	I0725 18:51:51.809787   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.809795   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:51.809800   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:51.809846   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:51.843330   60176 cri.go:89] found id: ""
	I0725 18:51:51.843359   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.843366   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:51.843372   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:51.843428   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:51.877636   60176 cri.go:89] found id: ""
	I0725 18:51:51.877670   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.877680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:51.877685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:51.877734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:51.911846   60176 cri.go:89] found id: ""
	I0725 18:51:51.911869   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.911876   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:51.911881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:51.911927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:51.945447   60176 cri.go:89] found id: ""
	I0725 18:51:51.945474   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.945482   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:51.945488   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:51.945539   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:51.976801   60176 cri.go:89] found id: ""
	I0725 18:51:51.976828   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.976838   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:51.976848   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:51.976863   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:51.989131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:51.989158   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:52.055834   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:52.055857   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:52.055871   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:52.132360   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:52.132399   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:52.170676   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:52.170706   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:54.724654   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:54.738852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:54.738910   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:54.772356   60176 cri.go:89] found id: ""
	I0725 18:51:54.772386   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.772396   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:54.772403   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:54.772463   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:54.805079   60176 cri.go:89] found id: ""
	I0725 18:51:54.805105   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.805115   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:54.805122   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:54.805179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:54.836276   60176 cri.go:89] found id: ""
	I0725 18:51:54.836303   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.836313   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:54.836329   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:54.836394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:54.869019   60176 cri.go:89] found id: ""
	I0725 18:51:54.869046   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.869053   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:54.869059   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:54.869108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:54.905448   60176 cri.go:89] found id: ""
	I0725 18:51:54.905475   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.905485   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:54.905492   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:54.905553   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:54.937364   60176 cri.go:89] found id: ""
	I0725 18:51:54.937387   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.937396   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:54.937401   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:54.937448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:54.969287   60176 cri.go:89] found id: ""
	I0725 18:51:54.969322   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.969333   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:54.969340   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:54.969405   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:55.002779   60176 cri.go:89] found id: ""
	I0725 18:51:55.002804   60176 logs.go:276] 0 containers: []
	W0725 18:51:55.002811   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:55.002819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:55.002830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:55.015588   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:55.015614   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:55.093349   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:55.093372   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:55.093388   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:55.174006   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:55.174046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:55.211316   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:55.211347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
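	The cycle above is the repeated diagnostic pass visible throughout this log: pgrep for a running apiserver, a crictl listing for each expected control-plane container, then log collection. A minimal way to rerun the same container check by hand, assuming shell access to the node under test (for example via `minikube ssh`), is sketched below; the commands simply mirror the ones already shown in the log and are not part of the test output:
	
	  # List any kube-apiserver container, running or exited (empty output matches the `found id: ""` lines above).
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # Same check for etcd, plus the kubelet log tail that logs.go gathers.
	  sudo crictl ps -a --quiet --name=etcd
	  sudo journalctl -u kubelet -n 400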
	I0725 18:51:55.787379   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.286757   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:56.099628   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.100403   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:00.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.148430   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:59.648971   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.762027   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:57.774121   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:57.774194   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:57.814748   60176 cri.go:89] found id: ""
	I0725 18:51:57.814779   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.814790   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:57.814798   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:57.814860   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:57.851037   60176 cri.go:89] found id: ""
	I0725 18:51:57.851063   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.851070   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:57.851075   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:57.851123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:57.882717   60176 cri.go:89] found id: ""
	I0725 18:51:57.882749   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.882760   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:57.882768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:57.882830   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:57.917019   60176 cri.go:89] found id: ""
	I0725 18:51:57.917049   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.917059   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:57.917066   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:57.917126   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:57.950853   60176 cri.go:89] found id: ""
	I0725 18:51:57.950882   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.950891   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:57.950896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:57.950962   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:57.991946   60176 cri.go:89] found id: ""
	I0725 18:51:57.991970   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.991980   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:57.991986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:57.992049   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:58.037572   60176 cri.go:89] found id: ""
	I0725 18:51:58.037602   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.037611   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:58.037617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:58.037679   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:58.073018   60176 cri.go:89] found id: ""
	I0725 18:51:58.073040   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.073048   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:58.073056   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:58.073068   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:58.144357   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:58.144382   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:58.144398   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:58.224162   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:58.224202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:58.260888   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:58.260914   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:58.313819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:58.313848   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:00.826939   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:00.838883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:00.838951   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:00.872544   60176 cri.go:89] found id: ""
	I0725 18:52:00.872573   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.872584   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:00.872600   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:00.872663   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:00.903504   60176 cri.go:89] found id: ""
	I0725 18:52:00.903526   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.903533   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:00.903539   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:00.903585   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:00.938057   60176 cri.go:89] found id: ""
	I0725 18:52:00.938085   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.938095   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:00.938103   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:00.938168   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:00.970586   60176 cri.go:89] found id: ""
	I0725 18:52:00.970616   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.970625   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:00.970631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:00.970699   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:01.004158   60176 cri.go:89] found id: ""
	I0725 18:52:01.004192   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.004201   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:01.004205   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:01.004265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:01.036833   60176 cri.go:89] found id: ""
	I0725 18:52:01.036862   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.036871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:01.036876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:01.036927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:01.072207   60176 cri.go:89] found id: ""
	I0725 18:52:01.072236   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.072247   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:01.072253   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:01.072309   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:01.110805   60176 cri.go:89] found id: ""
	I0725 18:52:01.110859   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.110871   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:01.110882   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:01.110898   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:01.150422   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:01.150448   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:01.198988   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:01.199026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:01.212826   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:01.212860   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:01.282008   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:01.282034   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:01.282054   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:00.787431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.286174   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.599299   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:05.099494   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.147372   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:04.147989   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.148300   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.865014   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:03.877335   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:03.877419   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:03.913376   60176 cri.go:89] found id: ""
	I0725 18:52:03.913406   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.913413   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:03.913420   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:03.913469   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:03.948997   60176 cri.go:89] found id: ""
	I0725 18:52:03.949022   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.949029   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:03.949034   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:03.949082   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:03.985320   60176 cri.go:89] found id: ""
	I0725 18:52:03.985353   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.985361   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:03.985367   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:03.985423   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:04.019626   60176 cri.go:89] found id: ""
	I0725 18:52:04.019648   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.019656   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:04.019662   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:04.019716   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:04.050947   60176 cri.go:89] found id: ""
	I0725 18:52:04.050978   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.050989   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:04.050997   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:04.051066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:04.083581   60176 cri.go:89] found id: ""
	I0725 18:52:04.083613   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.083625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:04.083633   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:04.083712   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:04.117537   60176 cri.go:89] found id: ""
	I0725 18:52:04.117574   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.117585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:04.117592   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:04.117652   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:04.151531   60176 cri.go:89] found id: ""
	I0725 18:52:04.151556   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.151563   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:04.151575   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:04.151593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:04.201037   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:04.201067   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:04.214848   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:04.214879   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:04.281309   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:04.281338   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:04.281353   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:04.360880   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:04.360913   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:05.287780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.288971   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.100417   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:09.602529   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:08.149450   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:10.647672   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.899950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:06.912053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:06.912124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:06.945726   60176 cri.go:89] found id: ""
	I0725 18:52:06.945752   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.945761   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:06.945766   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:06.945824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:06.979170   60176 cri.go:89] found id: ""
	I0725 18:52:06.979200   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.979210   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:06.979217   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:06.979279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:07.009633   60176 cri.go:89] found id: ""
	I0725 18:52:07.009661   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.009670   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:07.009675   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:07.009735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:07.042022   60176 cri.go:89] found id: ""
	I0725 18:52:07.042045   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.042054   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:07.042061   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:07.042121   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:07.074755   60176 cri.go:89] found id: ""
	I0725 18:52:07.074779   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.074787   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:07.074792   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:07.074853   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:07.109421   60176 cri.go:89] found id: ""
	I0725 18:52:07.109447   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.109457   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:07.109464   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:07.109522   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:07.144848   60176 cri.go:89] found id: ""
	I0725 18:52:07.144879   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.144889   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:07.144897   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:07.144956   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:07.182129   60176 cri.go:89] found id: ""
	I0725 18:52:07.182157   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.182169   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:07.182178   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:07.182192   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:07.235471   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:07.235509   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:07.251999   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:07.252025   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:07.334671   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:07.334691   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:07.334703   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:07.415819   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:07.415853   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.953603   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:09.966281   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:09.966362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:09.998237   60176 cri.go:89] found id: ""
	I0725 18:52:09.998259   60176 logs.go:276] 0 containers: []
	W0725 18:52:09.998267   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:09.998272   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:09.998332   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:10.030191   60176 cri.go:89] found id: ""
	I0725 18:52:10.030213   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.030220   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:10.030228   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:10.030273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:10.062117   60176 cri.go:89] found id: ""
	I0725 18:52:10.062144   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.062154   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:10.062159   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:10.062208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:10.093801   60176 cri.go:89] found id: ""
	I0725 18:52:10.093831   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.093841   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:10.093848   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:10.093911   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:10.125705   60176 cri.go:89] found id: ""
	I0725 18:52:10.125731   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.125741   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:10.125748   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:10.125814   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:10.158731   60176 cri.go:89] found id: ""
	I0725 18:52:10.158753   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.158761   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:10.158766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:10.158810   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:10.190408   60176 cri.go:89] found id: ""
	I0725 18:52:10.190435   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.190443   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:10.190449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:10.190503   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:10.221937   60176 cri.go:89] found id: ""
	I0725 18:52:10.221967   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.221977   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:10.221992   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:10.222007   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:10.270299   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:10.270332   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:10.283787   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:10.283823   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:10.358121   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:10.358146   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:10.358163   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:10.437607   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:10.437643   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.786088   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:11.786251   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:13.786457   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.099676   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.600380   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.647922   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.648433   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
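	The interleaved pod_ready lines come from three parallel test processes (PIDs 59645, 59378 and 60732), each polling a metrics-server pod for the Ready condition. A quick manual equivalent, assuming a kubeconfig context for the profile being tested (placeholder shown as <profile>), would be:
	
	  # Print the pod's Ready condition; "True" is expected once the pod becomes ready.
	  kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-4gcts \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'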
	I0725 18:52:12.978064   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:12.995812   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:12.995868   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:13.041196   60176 cri.go:89] found id: ""
	I0725 18:52:13.041222   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.041231   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:13.041239   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:13.041290   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:13.074981   60176 cri.go:89] found id: ""
	I0725 18:52:13.075005   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.075013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:13.075018   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:13.075078   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:13.108689   60176 cri.go:89] found id: ""
	I0725 18:52:13.108714   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.108725   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:13.108732   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:13.108788   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:13.144876   60176 cri.go:89] found id: ""
	I0725 18:52:13.144903   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.144913   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:13.144920   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:13.145008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:13.177912   60176 cri.go:89] found id: ""
	I0725 18:52:13.177936   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.177943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:13.177949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:13.178004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:13.208752   60176 cri.go:89] found id: ""
	I0725 18:52:13.208783   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.208794   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:13.208802   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:13.208861   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:13.240146   60176 cri.go:89] found id: ""
	I0725 18:52:13.240181   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.240191   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:13.240197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:13.240265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:13.276749   60176 cri.go:89] found id: ""
	I0725 18:52:13.276775   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.276783   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:13.276793   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:13.276808   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:13.342307   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:13.342341   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:13.342358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:13.426659   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:13.426691   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:13.462986   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:13.463014   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:13.513921   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:13.513956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.028587   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:16.041712   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:16.041771   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:16.074562   60176 cri.go:89] found id: ""
	I0725 18:52:16.074593   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.074603   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:16.074611   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:16.074668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:16.110581   60176 cri.go:89] found id: ""
	I0725 18:52:16.110610   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.110620   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:16.110627   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:16.110686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:16.145233   60176 cri.go:89] found id: ""
	I0725 18:52:16.145256   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.145266   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:16.145274   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:16.145333   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:16.180032   60176 cri.go:89] found id: ""
	I0725 18:52:16.180059   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.180070   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:16.180084   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:16.180147   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:16.211984   60176 cri.go:89] found id: ""
	I0725 18:52:16.212013   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.212021   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:16.212028   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:16.212086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:16.243930   60176 cri.go:89] found id: ""
	I0725 18:52:16.243958   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.243965   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:16.243970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:16.244018   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:16.276858   60176 cri.go:89] found id: ""
	I0725 18:52:16.276886   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.276895   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:16.276903   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:16.276964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:16.309039   60176 cri.go:89] found id: ""
	I0725 18:52:16.309068   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.309079   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:16.309089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:16.309103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:16.358664   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:16.358699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.371701   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:16.371733   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:52:15.786767   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.787058   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.099941   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.100836   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.148099   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.150035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:52:16.440851   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:16.440877   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:16.440892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:16.515546   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:16.515581   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.053916   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:19.067831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:19.067899   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:19.100740   60176 cri.go:89] found id: ""
	I0725 18:52:19.100765   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.100776   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:19.100783   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:19.100844   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:19.137247   60176 cri.go:89] found id: ""
	I0725 18:52:19.137272   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.137279   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:19.137284   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:19.137348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:19.181550   60176 cri.go:89] found id: ""
	I0725 18:52:19.181582   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.181594   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:19.181601   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:19.181666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:19.215392   60176 cri.go:89] found id: ""
	I0725 18:52:19.215418   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.215427   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:19.215433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:19.215495   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:19.247896   60176 cri.go:89] found id: ""
	I0725 18:52:19.247923   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.247933   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:19.247940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:19.248001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:19.285250   60176 cri.go:89] found id: ""
	I0725 18:52:19.285276   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.285286   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:19.285293   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:19.285362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:19.323470   60176 cri.go:89] found id: ""
	I0725 18:52:19.323500   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.323510   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:19.323518   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:19.323583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:19.358435   60176 cri.go:89] found id: ""
	I0725 18:52:19.358458   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.358466   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:19.358475   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:19.358491   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:19.422806   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:19.422825   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:19.422837   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:19.504316   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:19.504370   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.543929   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:19.543956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:19.596268   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:19.596300   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
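	Each "describe nodes" attempt above fails with "connection refused" on localhost:8443, which is consistent with the crictl checks finding no kube-apiserver container at all. A sketch of confirming the same thing directly on the node (assuming shell access; the curl probe is an extra check and not part of the test output):
	
	  # Exact command from the log; it fails while the apiserver is down.
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  # Extra probe: the apiserver health endpoint should answer once the control plane is up.
	  curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"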
	I0725 18:52:20.286982   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.287235   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.601342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.099874   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.648118   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.147655   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.148904   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.110193   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:22.123411   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:22.123472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:22.158539   60176 cri.go:89] found id: ""
	I0725 18:52:22.158577   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.158588   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:22.158595   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:22.158654   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:22.196231   60176 cri.go:89] found id: ""
	I0725 18:52:22.196260   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.196270   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:22.196277   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:22.196354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:22.233119   60176 cri.go:89] found id: ""
	I0725 18:52:22.233150   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.233160   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:22.233167   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:22.233231   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:22.265273   60176 cri.go:89] found id: ""
	I0725 18:52:22.265302   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.265312   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:22.265322   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:22.265384   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:22.298933   60176 cri.go:89] found id: ""
	I0725 18:52:22.298959   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.298968   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:22.298982   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:22.299055   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:22.330841   60176 cri.go:89] found id: ""
	I0725 18:52:22.330877   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.330888   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:22.330896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:22.330965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:22.363717   60176 cri.go:89] found id: ""
	I0725 18:52:22.363743   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.363753   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:22.363760   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:22.363818   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:22.398672   60176 cri.go:89] found id: ""
	I0725 18:52:22.398701   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.398711   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:22.398722   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:22.398739   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:22.452774   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:22.452807   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:22.465478   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:22.465507   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:22.538473   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:22.538492   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:22.538504   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:22.622982   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:22.623029   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
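	Each "listing CRI containers" step above reduces to the crictl call on the following Run: line; a compact sketch that performs the same presence check for every component name probed in this log:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done

	An empty result for every name, as seen repeatedly above, means no control-plane containers exist on the node yet.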
	I0725 18:52:25.163174   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:25.176183   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:25.176242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:25.212455   60176 cri.go:89] found id: ""
	I0725 18:52:25.212488   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.212497   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:25.212504   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:25.212558   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:25.249901   60176 cri.go:89] found id: ""
	I0725 18:52:25.249930   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.249938   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:25.249943   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:25.250002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:25.284400   60176 cri.go:89] found id: ""
	I0725 18:52:25.284425   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.284435   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:25.284443   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:25.284510   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:25.322175   60176 cri.go:89] found id: ""
	I0725 18:52:25.322199   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.322208   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:25.322214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:25.322274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:25.358579   60176 cri.go:89] found id: ""
	I0725 18:52:25.358606   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.358613   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:25.358618   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:25.358668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:25.393516   60176 cri.go:89] found id: ""
	I0725 18:52:25.393541   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.393552   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:25.393559   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:25.393619   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:25.426256   60176 cri.go:89] found id: ""
	I0725 18:52:25.426284   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.426293   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:25.426300   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:25.426386   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:25.460227   60176 cri.go:89] found id: ""
	I0725 18:52:25.460249   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.460257   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:25.460265   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:25.460276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:25.512461   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:25.512494   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:25.526304   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:25.526347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:25.597593   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:25.597618   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:25.597634   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:25.674233   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:25.674269   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:24.787536   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:27.286447   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.100033   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.599703   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.648517   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:30.650728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.209473   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:28.223161   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:28.223226   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:28.260471   60176 cri.go:89] found id: ""
	I0725 18:52:28.260500   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.260510   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:28.260517   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:28.260578   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:28.296055   60176 cri.go:89] found id: ""
	I0725 18:52:28.296093   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.296109   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:28.296117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:28.296179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:28.327790   60176 cri.go:89] found id: ""
	I0725 18:52:28.327819   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.327830   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:28.327836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:28.327896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:28.359967   60176 cri.go:89] found id: ""
	I0725 18:52:28.359994   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.360005   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:28.360012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:28.360076   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:28.394025   60176 cri.go:89] found id: ""
	I0725 18:52:28.394057   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.394065   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:28.394070   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:28.394119   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:28.425844   60176 cri.go:89] found id: ""
	I0725 18:52:28.425866   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.425874   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:28.425881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:28.425952   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:28.459239   60176 cri.go:89] found id: ""
	I0725 18:52:28.459266   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.459276   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:28.459283   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:28.459355   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:28.493964   60176 cri.go:89] found id: ""
	I0725 18:52:28.493992   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.494004   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:28.494015   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:28.494030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:28.543108   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:28.543138   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:28.556408   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:28.556440   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:28.622780   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:28.622802   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:28.622815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:28.705901   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:28.705935   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.247642   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:31.260467   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:31.260536   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:31.293280   60176 cri.go:89] found id: ""
	I0725 18:52:31.293303   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.293311   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:31.293316   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:31.293372   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:31.325186   60176 cri.go:89] found id: ""
	I0725 18:52:31.325220   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.325232   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:31.325238   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:31.325295   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:31.359715   60176 cri.go:89] found id: ""
	I0725 18:52:31.359744   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.359756   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:31.359763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:31.359821   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:29.287628   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.787471   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.099921   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.600091   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.147181   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:35.147612   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.396998   60176 cri.go:89] found id: ""
	I0725 18:52:31.397031   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.397043   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:31.397051   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:31.397107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:31.430896   60176 cri.go:89] found id: ""
	I0725 18:52:31.430921   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.430934   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:31.430941   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:31.430993   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:31.464746   60176 cri.go:89] found id: ""
	I0725 18:52:31.464775   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.464785   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:31.464791   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:31.464856   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:31.500645   60176 cri.go:89] found id: ""
	I0725 18:52:31.500668   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.500677   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:31.500682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:31.500730   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:31.534394   60176 cri.go:89] found id: ""
	I0725 18:52:31.534418   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.534427   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:31.534434   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:31.534446   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:31.615633   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:31.615667   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.657138   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:31.657164   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:31.707872   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:31.707907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:31.721076   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:31.721100   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:31.787451   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
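	The repeated "connection to the server localhost:8443 was refused" errors reflect that no API server process is up; the log's own process probe can be repeated by hand (the pattern is copied from the Run: lines, with quoting added):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver is not running"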
	I0725 18:52:34.288248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:34.301172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:34.301230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:34.333115   60176 cri.go:89] found id: ""
	I0725 18:52:34.333143   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.333153   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:34.333159   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:34.333206   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:34.368762   60176 cri.go:89] found id: ""
	I0725 18:52:34.368794   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.368805   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:34.368812   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:34.368875   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:34.404655   60176 cri.go:89] found id: ""
	I0725 18:52:34.404681   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.404691   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:34.404699   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:34.404759   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:34.438034   60176 cri.go:89] found id: ""
	I0725 18:52:34.438058   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.438068   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:34.438075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:34.438134   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:34.472642   60176 cri.go:89] found id: ""
	I0725 18:52:34.472667   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.472678   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:34.472684   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:34.472744   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:34.511813   60176 cri.go:89] found id: ""
	I0725 18:52:34.511846   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.511858   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:34.511876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:34.511947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:34.544142   60176 cri.go:89] found id: ""
	I0725 18:52:34.544172   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.544183   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:34.544190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:34.544253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:34.580404   60176 cri.go:89] found id: ""
	I0725 18:52:34.580428   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.580439   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:34.580451   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:34.580468   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:34.620866   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:34.620892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:34.675204   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:34.675237   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:34.688592   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:34.688616   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:34.760208   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:34.760234   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:34.760251   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:34.288570   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.786448   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.786936   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.099207   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.099682   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.100107   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.647899   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.147664   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.337593   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:37.353055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:37.353125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:37.386957   60176 cri.go:89] found id: ""
	I0725 18:52:37.386985   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.386996   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:37.387003   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:37.387062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:37.419464   60176 cri.go:89] found id: ""
	I0725 18:52:37.419489   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.419496   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:37.419501   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:37.419557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:37.452553   60176 cri.go:89] found id: ""
	I0725 18:52:37.452582   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.452592   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:37.452598   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:37.452660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:37.484946   60176 cri.go:89] found id: ""
	I0725 18:52:37.484971   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.484978   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:37.484983   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:37.485029   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:37.517509   60176 cri.go:89] found id: ""
	I0725 18:52:37.517535   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.517546   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:37.517554   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:37.517604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:37.549971   60176 cri.go:89] found id: ""
	I0725 18:52:37.549995   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.550003   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:37.550010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:37.550067   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:37.581630   60176 cri.go:89] found id: ""
	I0725 18:52:37.581661   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.581670   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:37.581676   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:37.581736   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:37.616677   60176 cri.go:89] found id: ""
	I0725 18:52:37.616705   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.616714   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:37.616727   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:37.616741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:37.630482   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:37.630517   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:37.699856   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
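	The "describe nodes" step fails for the same reason: it runs the bundled kubectl against the node-local kubeconfig, which points at the unreachable API server. A sketch of the same command plus a check of which endpoint that kubeconfig targets (the config view/jsonpath query is an assumption, not part of the original log):

	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      config view --minify -o jsonpath='{.clusters[0].cluster.server}'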
	I0725 18:52:37.699883   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:37.699912   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:37.781132   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:37.781162   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:37.819877   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:37.819904   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.372910   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:40.385605   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:40.385672   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:40.420547   60176 cri.go:89] found id: ""
	I0725 18:52:40.420575   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.420586   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:40.420593   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:40.420642   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:40.455644   60176 cri.go:89] found id: ""
	I0725 18:52:40.455666   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.455674   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:40.455679   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:40.455735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:40.486576   60176 cri.go:89] found id: ""
	I0725 18:52:40.486599   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.486607   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:40.486613   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:40.486661   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:40.520015   60176 cri.go:89] found id: ""
	I0725 18:52:40.520038   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.520046   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:40.520053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:40.520115   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:40.550645   60176 cri.go:89] found id: ""
	I0725 18:52:40.550672   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.550680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:40.550685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:40.550739   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:40.584736   60176 cri.go:89] found id: ""
	I0725 18:52:40.584759   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.584766   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:40.584771   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:40.584827   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:40.620112   60176 cri.go:89] found id: ""
	I0725 18:52:40.620140   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.620151   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:40.620158   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:40.620221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:40.660888   60176 cri.go:89] found id: ""
	I0725 18:52:40.660910   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.660917   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:40.660926   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:40.660937   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.713935   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:40.713967   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:40.727194   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:40.727218   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:40.797362   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:40.797387   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:40.797408   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:40.878723   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:40.878756   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:41.286942   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.288780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.600347   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:45.099379   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.148037   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:44.648236   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.421579   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:43.434054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:43.434113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:43.468844   60176 cri.go:89] found id: ""
	I0725 18:52:43.468870   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.468880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:43.468887   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:43.468948   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:43.501075   60176 cri.go:89] found id: ""
	I0725 18:52:43.501102   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.501113   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:43.501120   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:43.501175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:43.533347   60176 cri.go:89] found id: ""
	I0725 18:52:43.533372   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.533381   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:43.533387   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:43.533439   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:43.569764   60176 cri.go:89] found id: ""
	I0725 18:52:43.569787   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.569795   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:43.569801   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:43.569851   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:43.604897   60176 cri.go:89] found id: ""
	I0725 18:52:43.604924   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.604935   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:43.604942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:43.604999   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:43.638584   60176 cri.go:89] found id: ""
	I0725 18:52:43.638621   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.638633   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:43.638640   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:43.638691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:43.672302   60176 cri.go:89] found id: ""
	I0725 18:52:43.672348   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.672359   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:43.672366   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:43.672425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:43.708589   60176 cri.go:89] found id: ""
	I0725 18:52:43.708620   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.708630   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:43.708641   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:43.708660   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:43.761733   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:43.761766   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:43.775233   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:43.775258   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:43.840767   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:43.840788   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:43.840803   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:43.914698   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:43.914730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:45.786511   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.787882   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.100130   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.600576   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.147728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.648227   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:46.451913   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:46.465836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:46.465896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:46.499330   60176 cri.go:89] found id: ""
	I0725 18:52:46.499359   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.499369   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:46.499381   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:46.499446   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:46.537724   60176 cri.go:89] found id: ""
	I0725 18:52:46.537748   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.537758   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:46.537764   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:46.537825   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:46.568410   60176 cri.go:89] found id: ""
	I0725 18:52:46.568437   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.568446   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:46.568453   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:46.568519   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:46.599497   60176 cri.go:89] found id: ""
	I0725 18:52:46.599525   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.599535   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:46.599542   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:46.599607   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:46.631388   60176 cri.go:89] found id: ""
	I0725 18:52:46.631418   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.631427   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:46.631433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:46.631489   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:46.670666   60176 cri.go:89] found id: ""
	I0725 18:52:46.670688   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.670695   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:46.670701   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:46.670756   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:46.702825   60176 cri.go:89] found id: ""
	I0725 18:52:46.702862   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.702874   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:46.702883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:46.702947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:46.738431   60176 cri.go:89] found id: ""
	I0725 18:52:46.738459   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.738469   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:46.738479   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:46.738493   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:46.796704   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:46.796748   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:46.812042   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:46.812072   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:46.884905   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:46.884927   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:46.884942   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:46.965733   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:46.965773   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.505190   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:49.519648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:49.519733   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:49.559027   60176 cri.go:89] found id: ""
	I0725 18:52:49.559057   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.559064   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:49.559072   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:49.559124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:49.591468   60176 cri.go:89] found id: ""
	I0725 18:52:49.591489   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.591497   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:49.591503   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:49.591557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:49.629091   60176 cri.go:89] found id: ""
	I0725 18:52:49.629120   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.629129   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:49.629135   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:49.629199   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:49.664584   60176 cri.go:89] found id: ""
	I0725 18:52:49.664621   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.664633   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:49.664641   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:49.664693   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:49.695208   60176 cri.go:89] found id: ""
	I0725 18:52:49.695237   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.695247   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:49.695258   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:49.695323   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:49.726260   60176 cri.go:89] found id: ""
	I0725 18:52:49.726288   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.726299   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:49.726306   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:49.726468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:49.759938   60176 cri.go:89] found id: ""
	I0725 18:52:49.759969   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.759981   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:49.759990   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:49.760043   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:49.794113   60176 cri.go:89] found id: ""
	I0725 18:52:49.794142   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.794153   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:49.794164   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:49.794178   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.834409   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:49.834443   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:49.890684   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:49.890730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:49.904504   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:49.904534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:49.971482   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:49.971508   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:49.971523   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:50.286712   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.786827   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.099988   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.600144   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.147545   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.147590   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:56.148752   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.552586   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:52.564658   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:52.564732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:52.604434   60176 cri.go:89] found id: ""
	I0725 18:52:52.604460   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.604470   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:52.604478   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:52.604532   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:52.638870   60176 cri.go:89] found id: ""
	I0725 18:52:52.638893   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.638907   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:52.638914   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:52.638973   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:52.670494   60176 cri.go:89] found id: ""
	I0725 18:52:52.670521   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.670531   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:52.670538   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:52.670604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:52.702250   60176 cri.go:89] found id: ""
	I0725 18:52:52.702282   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.702291   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:52.702298   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:52.702360   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:52.734144   60176 cri.go:89] found id: ""
	I0725 18:52:52.734172   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.734181   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:52.734187   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:52.734241   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:52.767581   60176 cri.go:89] found id: ""
	I0725 18:52:52.767606   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.767617   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:52.767624   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:52.767687   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:52.798874   60176 cri.go:89] found id: ""
	I0725 18:52:52.798895   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.798903   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:52.798908   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:52.798965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:52.829237   60176 cri.go:89] found id: ""
	I0725 18:52:52.829266   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.829276   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:52.829287   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:52.829309   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:52.879820   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:52.879856   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:52.893453   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:52.893477   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:52.962899   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:52.962925   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:52.962944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:53.042202   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:53.042234   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.581146   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:55.594458   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:55.594529   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:55.628122   60176 cri.go:89] found id: ""
	I0725 18:52:55.628152   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.628163   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:55.628170   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:55.628240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:55.661098   60176 cri.go:89] found id: ""
	I0725 18:52:55.661126   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.661137   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:55.661143   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:55.661195   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:55.694635   60176 cri.go:89] found id: ""
	I0725 18:52:55.694664   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.694675   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:55.694682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:55.694746   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:55.728875   60176 cri.go:89] found id: ""
	I0725 18:52:55.728902   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.728912   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:55.728924   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:55.728986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:55.764386   60176 cri.go:89] found id: ""
	I0725 18:52:55.764414   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.764423   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:55.764430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:55.764487   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:55.798285   60176 cri.go:89] found id: ""
	I0725 18:52:55.798335   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.798348   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:55.798355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:55.798407   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:55.833049   60176 cri.go:89] found id: ""
	I0725 18:52:55.833076   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.833083   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:55.833088   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:55.833144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:55.872278   60176 cri.go:89] found id: ""
	I0725 18:52:55.872310   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.872335   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:55.872347   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:55.872362   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.908301   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:55.908344   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:55.960197   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:55.960230   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:55.973912   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:55.973941   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:56.042103   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:56.042128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:56.042141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:54.787516   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.286820   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.099342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:59.099712   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.647566   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:00.647721   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.618832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:58.631315   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:58.631374   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:58.666492   60176 cri.go:89] found id: ""
	I0725 18:52:58.666521   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.666532   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:58.666540   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:58.666608   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:58.700391   60176 cri.go:89] found id: ""
	I0725 18:52:58.700421   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.700431   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:58.700450   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:58.700518   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:58.734582   60176 cri.go:89] found id: ""
	I0725 18:52:58.734608   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.734617   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:58.734621   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:58.734692   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:58.767777   60176 cri.go:89] found id: ""
	I0725 18:52:58.767806   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.767817   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:58.767823   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:58.767886   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:58.801021   60176 cri.go:89] found id: ""
	I0725 18:52:58.801046   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.801053   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:58.801058   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:58.801102   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:58.833191   60176 cri.go:89] found id: ""
	I0725 18:52:58.833223   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.833231   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:58.833236   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:58.833284   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:58.864805   60176 cri.go:89] found id: ""
	I0725 18:52:58.864839   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.864849   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:58.864854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:58.864916   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:58.896342   60176 cri.go:89] found id: ""
	I0725 18:52:58.896373   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.896384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:58.896396   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:58.896415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:58.950614   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:58.950652   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:58.974026   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:58.974063   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:59.056282   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:59.056305   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:59.056349   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:59.138254   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:59.138292   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:59.785805   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.787477   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.099859   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.604940   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.147177   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:05.147885   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.680405   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:01.693093   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:01.693161   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:01.725456   60176 cri.go:89] found id: ""
	I0725 18:53:01.725483   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.725494   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:01.725501   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:01.725562   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:01.757644   60176 cri.go:89] found id: ""
	I0725 18:53:01.757677   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.757688   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:01.757694   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:01.757765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:01.793640   60176 cri.go:89] found id: ""
	I0725 18:53:01.793660   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.793667   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:01.793672   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:01.793718   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:01.829336   60176 cri.go:89] found id: ""
	I0725 18:53:01.829368   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.829379   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:01.829386   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:01.829442   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:01.864597   60176 cri.go:89] found id: ""
	I0725 18:53:01.864625   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.864636   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:01.864643   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:01.864704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:01.895962   60176 cri.go:89] found id: ""
	I0725 18:53:01.895990   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.896001   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:01.896012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:01.896070   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:01.926426   60176 cri.go:89] found id: ""
	I0725 18:53:01.926451   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.926459   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:01.926463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:01.926517   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:01.957722   60176 cri.go:89] found id: ""
	I0725 18:53:01.957746   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.957755   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:01.957764   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:01.957779   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:02.012061   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:02.012096   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:02.025396   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:02.025423   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:02.088683   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:02.088706   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:02.088718   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:02.170941   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:02.170974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.713619   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:04.734911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:04.734970   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:04.793399   60176 cri.go:89] found id: ""
	I0725 18:53:04.793427   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.793438   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:04.793445   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:04.793493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:04.823679   60176 cri.go:89] found id: ""
	I0725 18:53:04.823711   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.823723   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:04.823729   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:04.823793   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:04.854922   60176 cri.go:89] found id: ""
	I0725 18:53:04.854957   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.854964   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:04.854970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:04.855023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:04.886913   60176 cri.go:89] found id: ""
	I0725 18:53:04.886937   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.886945   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:04.886953   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:04.887008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:04.919868   60176 cri.go:89] found id: ""
	I0725 18:53:04.919896   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.919907   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:04.919914   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:04.919979   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:04.953542   60176 cri.go:89] found id: ""
	I0725 18:53:04.953571   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.953581   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:04.953588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:04.953649   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:04.986901   60176 cri.go:89] found id: ""
	I0725 18:53:04.986925   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.986932   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:04.986937   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:04.986986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:05.020084   60176 cri.go:89] found id: ""
	I0725 18:53:05.020124   60176 logs.go:276] 0 containers: []
	W0725 18:53:05.020133   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:05.020141   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:05.020153   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:05.075512   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:05.075544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:05.089227   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:05.089256   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:05.155689   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:05.155707   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:05.155719   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:05.230252   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:05.230286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.286327   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.286366   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.287693   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.099267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.100754   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:10.599173   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.148931   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:09.647549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.770919   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:07.784196   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:07.784354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:07.817549   60176 cri.go:89] found id: ""
	I0725 18:53:07.817581   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.817593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:07.817601   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:07.817674   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:07.852853   60176 cri.go:89] found id: ""
	I0725 18:53:07.852876   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.852883   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:07.852889   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:07.852941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:07.890344   60176 cri.go:89] found id: ""
	I0725 18:53:07.890370   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.890377   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:07.890383   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:07.890443   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:07.921718   60176 cri.go:89] found id: ""
	I0725 18:53:07.921749   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.921760   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:07.921768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:07.921824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:07.955721   60176 cri.go:89] found id: ""
	I0725 18:53:07.955753   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.955763   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:07.955769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:07.955820   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:07.987760   60176 cri.go:89] found id: ""
	I0725 18:53:07.987789   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.987799   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:07.987806   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:07.987878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:08.020881   60176 cri.go:89] found id: ""
	I0725 18:53:08.020912   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.020922   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:08.020929   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:08.020994   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:08.053983   60176 cri.go:89] found id: ""
	I0725 18:53:08.054013   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.054025   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:08.054037   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:08.054053   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:08.134954   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:08.134996   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:08.177056   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:08.177085   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:08.229080   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:08.229121   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:08.242211   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:08.242242   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:08.305979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:10.806662   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:10.819111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:10.819172   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:10.854609   60176 cri.go:89] found id: ""
	I0725 18:53:10.854639   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.854652   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:10.854660   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:10.854743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:10.893436   60176 cri.go:89] found id: ""
	I0725 18:53:10.893466   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.893478   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:10.893486   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:10.893555   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:10.927410   60176 cri.go:89] found id: ""
	I0725 18:53:10.927435   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.927444   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:10.927449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:10.927520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:10.958061   60176 cri.go:89] found id: ""
	I0725 18:53:10.958082   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.958090   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:10.958095   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:10.958149   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:10.988781   60176 cri.go:89] found id: ""
	I0725 18:53:10.988812   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.988824   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:10.988831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:10.988892   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:11.021096   60176 cri.go:89] found id: ""
	I0725 18:53:11.021126   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.021137   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:11.021145   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:11.021204   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:11.053320   60176 cri.go:89] found id: ""
	I0725 18:53:11.053355   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.053368   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:11.053377   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:11.053445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:11.085018   60176 cri.go:89] found id: ""
	I0725 18:53:11.085046   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.085055   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:11.085063   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:11.085074   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:11.136102   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:11.136139   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:11.150126   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:11.150154   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:11.219206   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:11.219226   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:11.219243   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:11.301501   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:11.301534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:10.787076   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.287049   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.100296   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:15.598090   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:11.648889   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:14.148494   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.148801   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.840771   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:13.853763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:13.853848   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:13.889060   60176 cri.go:89] found id: ""
	I0725 18:53:13.889089   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.889098   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:13.889105   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:13.889163   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:13.920861   60176 cri.go:89] found id: ""
	I0725 18:53:13.920889   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.920900   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:13.920910   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:13.920974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:13.952009   60176 cri.go:89] found id: ""
	I0725 18:53:13.952037   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.952048   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:13.952054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:13.952117   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:13.985991   60176 cri.go:89] found id: ""
	I0725 18:53:13.986020   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.986030   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:13.986036   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:13.986098   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:14.024968   60176 cri.go:89] found id: ""
	I0725 18:53:14.024995   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.025003   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:14.025008   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:14.025079   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:14.058861   60176 cri.go:89] found id: ""
	I0725 18:53:14.058886   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.058897   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:14.058912   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:14.058976   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:14.092587   60176 cri.go:89] found id: ""
	I0725 18:53:14.092613   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.092628   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:14.092634   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:14.092697   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:14.127085   60176 cri.go:89] found id: ""
	I0725 18:53:14.127115   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.127124   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:14.127134   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:14.127148   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:14.179505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:14.179537   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:14.192813   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:14.192840   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:14.256564   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:14.256590   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:14.256604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:14.338570   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:14.338604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:15.287102   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.787128   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.599288   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:19.600086   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:18.648466   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:21.147558   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.877636   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:16.891131   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:16.891208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:16.924210   60176 cri.go:89] found id: ""
	I0725 18:53:16.924243   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.924253   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:16.924261   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:16.924343   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:16.957212   60176 cri.go:89] found id: ""
	I0725 18:53:16.957240   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.957247   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:16.957254   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:16.957341   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:16.989205   60176 cri.go:89] found id: ""
	I0725 18:53:16.989236   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.989244   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:16.989249   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:16.989298   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:17.025775   60176 cri.go:89] found id: ""
	I0725 18:53:17.025801   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.025812   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:17.025819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:17.025887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:17.059185   60176 cri.go:89] found id: ""
	I0725 18:53:17.059213   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.059223   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:17.059229   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:17.059275   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:17.090838   60176 cri.go:89] found id: ""
	I0725 18:53:17.090863   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.090871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:17.090876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:17.090932   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:17.126012   60176 cri.go:89] found id: ""
	I0725 18:53:17.126036   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.126044   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:17.126048   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:17.126106   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:17.165369   60176 cri.go:89] found id: ""
	I0725 18:53:17.165394   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.165405   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:17.165415   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:17.165436   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:17.178730   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:17.178771   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:17.251639   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:17.251666   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:17.251681   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:17.334840   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:17.334887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:17.380868   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:17.380895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.931610   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:19.943864   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:19.943964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:19.975865   60176 cri.go:89] found id: ""
	I0725 18:53:19.975893   60176 logs.go:276] 0 containers: []
	W0725 18:53:19.975904   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:19.975910   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:19.975975   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:20.010230   60176 cri.go:89] found id: ""
	I0725 18:53:20.010258   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.010268   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:20.010274   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:20.010321   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:20.042591   60176 cri.go:89] found id: ""
	I0725 18:53:20.042618   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.042626   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:20.042632   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:20.042680   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:20.073184   60176 cri.go:89] found id: ""
	I0725 18:53:20.073212   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.073224   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:20.073231   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:20.073286   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:20.106770   60176 cri.go:89] found id: ""
	I0725 18:53:20.106798   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.106810   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:20.106818   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:20.106888   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:20.141368   60176 cri.go:89] found id: ""
	I0725 18:53:20.141419   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.141429   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:20.141437   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:20.141496   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:20.174814   60176 cri.go:89] found id: ""
	I0725 18:53:20.174841   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.174852   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:20.174859   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:20.174918   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:20.208463   60176 cri.go:89] found id: ""
	I0725 18:53:20.208489   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.208497   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:20.208505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:20.208519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:20.220843   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:20.220867   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:20.287846   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:20.287871   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:20.287887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:20.362354   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:20.362391   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:20.399616   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:20.399650   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.790264   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.288082   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.100856   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:24.600029   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:23.148297   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:25.647615   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.950804   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:22.963553   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:22.963625   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:22.996193   60176 cri.go:89] found id: ""
	I0725 18:53:22.996215   60176 logs.go:276] 0 containers: []
	W0725 18:53:22.996222   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:22.996228   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:22.996273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:23.029417   60176 cri.go:89] found id: ""
	I0725 18:53:23.029446   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.029455   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:23.029460   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:23.029508   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:23.062381   60176 cri.go:89] found id: ""
	I0725 18:53:23.062406   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.062414   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:23.062419   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:23.062471   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:23.093948   60176 cri.go:89] found id: ""
	I0725 18:53:23.093975   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.093987   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:23.093995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:23.094066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:23.128049   60176 cri.go:89] found id: ""
	I0725 18:53:23.128076   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.128085   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:23.128091   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:23.128139   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:23.164593   60176 cri.go:89] found id: ""
	I0725 18:53:23.164617   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.164625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:23.164631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:23.164683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:23.197996   60176 cri.go:89] found id: ""
	I0725 18:53:23.198024   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.198032   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:23.198037   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:23.198087   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:23.233498   60176 cri.go:89] found id: ""
	I0725 18:53:23.233533   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.233545   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:23.233565   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:23.233580   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:23.287473   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:23.287506   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:23.300308   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:23.300358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:23.368879   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:23.368906   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:23.368919   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:23.445420   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:23.445453   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
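Each retry cycle above runs the same per-component container lookup before falling back to log gathering. Condensed into a single hypothetical shell loop using the same crictl flags as the log lines, the check amounts to:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%-24s %s containers\n' "$c" "$(sudo crictl ps -a --quiet --name="$c" | wc -l)"
    done
    # every count is 0 here, which is why minikube keeps retrying and re-gathering diagnostics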
	I0725 18:53:25.985626   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:25.997898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:25.997971   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:26.030558   60176 cri.go:89] found id: ""
	I0725 18:53:26.030584   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.030593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:26.030599   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:26.030660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:26.067209   60176 cri.go:89] found id: ""
	I0725 18:53:26.067245   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.067256   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:26.067263   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:26.067348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:26.100872   60176 cri.go:89] found id: ""
	I0725 18:53:26.100891   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.100897   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:26.100902   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:26.100949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:26.135077   60176 cri.go:89] found id: ""
	I0725 18:53:26.135102   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.135110   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:26.135115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:26.135175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:26.171332   60176 cri.go:89] found id: ""
	I0725 18:53:26.171431   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.171445   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:26.171452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:26.171507   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:26.205883   60176 cri.go:89] found id: ""
	I0725 18:53:26.205912   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.205923   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:26.205930   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:26.205989   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:26.240407   60176 cri.go:89] found id: ""
	I0725 18:53:26.240436   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.240446   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:26.240452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:26.240513   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:26.273041   60176 cri.go:89] found id: ""
	I0725 18:53:26.273068   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.273078   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:26.273089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:26.273103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:26.327783   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:26.327815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:26.342925   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:26.342952   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:53:24.786526   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:26.786771   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:28.786890   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.100267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.600204   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.648059   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.648771   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:53:26.412563   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:26.412589   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:26.412605   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:26.493182   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:26.493222   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.030816   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:29.044047   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:29.044104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:29.077288   60176 cri.go:89] found id: ""
	I0725 18:53:29.077335   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.077354   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:29.077362   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:29.077429   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:29.113350   60176 cri.go:89] found id: ""
	I0725 18:53:29.113383   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.113395   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:29.113402   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:29.113472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:29.147123   60176 cri.go:89] found id: ""
	I0725 18:53:29.147151   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.147161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:29.147168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:29.147224   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:29.182248   60176 cri.go:89] found id: ""
	I0725 18:53:29.182279   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.182296   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:29.182304   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:29.182367   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:29.215750   60176 cri.go:89] found id: ""
	I0725 18:53:29.215777   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.215788   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:29.215795   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:29.215857   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:29.249001   60176 cri.go:89] found id: ""
	I0725 18:53:29.249027   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.249037   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:29.249044   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:29.249104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:29.281774   60176 cri.go:89] found id: ""
	I0725 18:53:29.281802   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.281812   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:29.281819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:29.281879   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:29.318703   60176 cri.go:89] found id: ""
	I0725 18:53:29.318728   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.318736   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:29.318744   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:29.318760   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:29.398145   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:29.398170   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:29.398184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:29.474090   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:29.474126   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.510143   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:29.510216   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:29.562952   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:29.562988   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
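The diagnostic sources gathered on every cycle are fixed: the kubelet and CRI-O units via journalctl, the kernel ring buffer filtered to warnings and above, the per-component crictl listing, and an overall container status. Run by hand on the node (assuming the same SSH access the harness uses), the commands are the ones quoted in the log:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a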
	I0725 18:53:30.787145   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.788031   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.099672   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.148832   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.647209   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.076743   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:32.090035   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:32.090108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:32.123139   60176 cri.go:89] found id: ""
	I0725 18:53:32.123173   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.123184   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:32.123191   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:32.123255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:32.156337   60176 cri.go:89] found id: ""
	I0725 18:53:32.156363   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.156372   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:32.156378   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:32.156437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:32.191566   60176 cri.go:89] found id: ""
	I0725 18:53:32.191597   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.191609   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:32.191617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:32.191684   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:32.225480   60176 cri.go:89] found id: ""
	I0725 18:53:32.225519   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.225528   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:32.225535   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:32.225593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:32.257129   60176 cri.go:89] found id: ""
	I0725 18:53:32.257160   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.257169   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:32.257175   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:32.257221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:32.298142   60176 cri.go:89] found id: ""
	I0725 18:53:32.298171   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.298180   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:32.298190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:32.298240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:32.331052   60176 cri.go:89] found id: ""
	I0725 18:53:32.331081   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.331092   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:32.331098   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:32.331143   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:32.364841   60176 cri.go:89] found id: ""
	I0725 18:53:32.364871   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.364882   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:32.364892   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:32.364907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:32.417931   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:32.417970   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:32.432131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:32.432159   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:32.499759   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:32.499784   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:32.499806   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:32.579140   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:32.579191   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:35.120647   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:35.133992   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:35.134084   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:35.172030   60176 cri.go:89] found id: ""
	I0725 18:53:35.172052   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.172061   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:35.172067   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:35.172123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:35.207893   60176 cri.go:89] found id: ""
	I0725 18:53:35.207920   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.207930   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:35.207937   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:35.207991   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:35.241626   60176 cri.go:89] found id: ""
	I0725 18:53:35.241651   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.241661   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:35.241668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:35.241732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:35.274017   60176 cri.go:89] found id: ""
	I0725 18:53:35.274047   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.274058   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:35.274064   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:35.274129   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:35.308778   60176 cri.go:89] found id: ""
	I0725 18:53:35.308809   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.308820   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:35.308827   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:35.308890   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:35.341366   60176 cri.go:89] found id: ""
	I0725 18:53:35.341392   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.341400   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:35.341406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:35.341461   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:35.373955   60176 cri.go:89] found id: ""
	I0725 18:53:35.373983   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.373994   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:35.374001   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:35.374058   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:35.404705   60176 cri.go:89] found id: ""
	I0725 18:53:35.404733   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.404743   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:35.404755   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:35.404794   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:35.455009   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:35.455043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:35.469113   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:35.469141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:35.533466   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:35.533497   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:35.533514   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:35.608513   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:35.608546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:34.789202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:37.287021   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.100385   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:40.599540   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.647379   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.648503   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.147602   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.147415   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:38.159974   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:38.160032   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:38.191108   60176 cri.go:89] found id: ""
	I0725 18:53:38.191138   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.191150   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:38.191157   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:38.191207   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:38.223494   60176 cri.go:89] found id: ""
	I0725 18:53:38.223519   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.223527   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:38.223533   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:38.223583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:38.254433   60176 cri.go:89] found id: ""
	I0725 18:53:38.254462   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.254473   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:38.254480   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:38.254546   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:38.286229   60176 cri.go:89] found id: ""
	I0725 18:53:38.286258   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.286268   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:38.286276   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:38.286339   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:38.323332   60176 cri.go:89] found id: ""
	I0725 18:53:38.323362   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.323371   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:38.323378   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:38.323441   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:38.356260   60176 cri.go:89] found id: ""
	I0725 18:53:38.356290   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.356301   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:38.356309   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:38.356383   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:38.388543   60176 cri.go:89] found id: ""
	I0725 18:53:38.388571   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.388582   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:38.388588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:38.388660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:38.424003   60176 cri.go:89] found id: ""
	I0725 18:53:38.424030   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.424040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:38.424051   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:38.424065   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:38.474963   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:38.474995   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:38.488392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:38.488425   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:38.561922   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:38.561946   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:38.562116   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:38.646569   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:38.646604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:41.190319   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:41.202314   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:41.202382   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:41.238344   60176 cri.go:89] found id: ""
	I0725 18:53:41.238370   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.238378   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:41.238383   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:41.238438   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:41.272219   60176 cri.go:89] found id: ""
	I0725 18:53:41.272252   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.272263   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:41.272271   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:41.272349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:41.307125   60176 cri.go:89] found id: ""
	I0725 18:53:41.307151   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.307161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:41.307168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:41.307230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:41.339277   60176 cri.go:89] found id: ""
	I0725 18:53:41.339307   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.339320   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:41.339328   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:41.339394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:41.373989   60176 cri.go:89] found id: ""
	I0725 18:53:41.374103   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.374126   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:41.374136   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:41.374205   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:39.287244   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.287891   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.787538   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:42.600625   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.099276   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.647388   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.648749   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.404939   60176 cri.go:89] found id: ""
	I0725 18:53:41.404968   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.404979   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:41.404986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:41.405050   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:41.436889   60176 cri.go:89] found id: ""
	I0725 18:53:41.436919   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.436931   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:41.436940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:41.437009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:41.468457   60176 cri.go:89] found id: ""
	I0725 18:53:41.468486   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.468496   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:41.468506   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:41.468520   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:41.519499   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:41.519529   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:41.533653   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:41.533688   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:41.602134   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:41.602156   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:41.602171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:41.676181   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:41.676214   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.213932   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:44.226286   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:44.226352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:44.258782   60176 cri.go:89] found id: ""
	I0725 18:53:44.258817   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.258829   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:44.258835   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:44.258887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:44.308398   60176 cri.go:89] found id: ""
	I0725 18:53:44.308424   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.308432   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:44.308437   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:44.308499   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:44.339388   60176 cri.go:89] found id: ""
	I0725 18:53:44.339414   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.339424   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:44.339430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:44.339493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:44.369635   60176 cri.go:89] found id: ""
	I0725 18:53:44.369669   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.369679   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:44.369685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:44.369751   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:44.403834   60176 cri.go:89] found id: ""
	I0725 18:53:44.403859   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.403869   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:44.403876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:44.403939   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:44.439172   60176 cri.go:89] found id: ""
	I0725 18:53:44.439204   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.439215   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:44.439222   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:44.439287   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:44.474638   60176 cri.go:89] found id: ""
	I0725 18:53:44.474664   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.474674   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:44.474681   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:44.474743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:44.506205   60176 cri.go:89] found id: ""
	I0725 18:53:44.506226   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.506233   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:44.506241   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:44.506253   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:44.587955   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:44.587994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.626251   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:44.626276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:44.679008   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:44.679040   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:44.691749   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:44.691776   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:44.763419   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:46.286260   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.287172   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.099923   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:49.600555   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.148223   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:50.648549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.263738   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:47.275907   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:47.275974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:47.313612   60176 cri.go:89] found id: ""
	I0725 18:53:47.313642   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.313651   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:47.313662   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:47.313727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:47.345186   60176 cri.go:89] found id: ""
	I0725 18:53:47.345215   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.345226   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:47.345233   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:47.345304   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:47.378074   60176 cri.go:89] found id: ""
	I0725 18:53:47.378103   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.378114   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:47.378128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:47.378188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:47.407147   60176 cri.go:89] found id: ""
	I0725 18:53:47.407176   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.407186   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:47.407193   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:47.407255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:47.437015   60176 cri.go:89] found id: ""
	I0725 18:53:47.437049   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.437061   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:47.437068   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:47.437153   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:47.469201   60176 cri.go:89] found id: ""
	I0725 18:53:47.469231   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.469241   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:47.469248   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:47.469331   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:47.501160   60176 cri.go:89] found id: ""
	I0725 18:53:47.501189   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.501199   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:47.501206   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:47.501264   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:47.535102   60176 cri.go:89] found id: ""
	I0725 18:53:47.535140   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.535149   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:47.535159   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:47.535184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:47.547568   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:47.547593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:47.616025   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:47.616048   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:47.616062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:47.690450   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:47.690482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:47.725553   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:47.725589   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.281640   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:50.295201   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:50.295272   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:50.331689   60176 cri.go:89] found id: ""
	I0725 18:53:50.331713   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.331721   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:50.331726   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:50.331770   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:50.362392   60176 cri.go:89] found id: ""
	I0725 18:53:50.362422   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.362434   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:50.362441   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:50.362505   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:50.393410   60176 cri.go:89] found id: ""
	I0725 18:53:50.393433   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.393441   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:50.393449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:50.393493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:50.425041   60176 cri.go:89] found id: ""
	I0725 18:53:50.425068   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.425079   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:50.425085   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:50.425144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:50.461533   60176 cri.go:89] found id: ""
	I0725 18:53:50.461556   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.461563   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:50.461568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:50.461614   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:50.494395   60176 cri.go:89] found id: ""
	I0725 18:53:50.494417   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.494425   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:50.494431   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:50.494485   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:50.528639   60176 cri.go:89] found id: ""
	I0725 18:53:50.528663   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.528672   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:50.528678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:50.528724   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:50.562007   60176 cri.go:89] found id: ""
	I0725 18:53:50.562032   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.562040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:50.562049   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:50.562062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.612107   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:50.612141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:50.624516   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:50.624540   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:50.724772   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:50.724799   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:50.724818   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:50.813891   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:50.813924   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:50.288626   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.786395   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.100268   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:54.598939   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.147764   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:55.147940   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.352629   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:53.366863   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:53.366941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:53.401238   60176 cri.go:89] found id: ""
	I0725 18:53:53.401266   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.401277   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:53.401284   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:53.401351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:53.434133   60176 cri.go:89] found id: ""
	I0725 18:53:53.434166   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.434178   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:53.434186   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:53.434248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:53.470135   60176 cri.go:89] found id: ""
	I0725 18:53:53.470157   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.470165   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:53.470170   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:53.470217   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:53.512591   60176 cri.go:89] found id: ""
	I0725 18:53:53.512613   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.512621   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:53.512626   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:53.512683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:53.544476   60176 cri.go:89] found id: ""
	I0725 18:53:53.544506   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.544517   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:53.544524   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:53.544591   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:53.577697   60176 cri.go:89] found id: ""
	I0725 18:53:53.577727   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.577746   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:53.577753   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:53.577816   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:53.610729   60176 cri.go:89] found id: ""
	I0725 18:53:53.610754   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.610761   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:53.610769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:53.610817   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:53.645127   60176 cri.go:89] found id: ""
	I0725 18:53:53.645154   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.645164   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:53.645174   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:53.645188   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:53.694575   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:53.694608   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:53.707931   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:53.707958   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:53.778423   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:53.778446   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:53.778460   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:53.860424   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:53.860458   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:55.286806   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.288524   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:56.600953   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:59.099301   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.647861   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:00.148873   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
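The pod_ready lines above come from minikube polling the metrics-server pods until their Ready condition turns True, which never happens in these runs. A manual version of the same query, using a pod name copied from the log and the appropriate kubectl context for the profile (a placeholder here), would be roughly:

  # print the Ready condition of the metrics-server pod seen in this run
  kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-4gcts \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
  # "False" matches the repeated status "Ready":"False" entries above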
	I0725 18:53:56.400993   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:56.418757   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:56.418834   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:56.466300   60176 cri.go:89] found id: ""
	I0725 18:53:56.466330   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.466340   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:56.466348   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:56.466409   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:56.523080   60176 cri.go:89] found id: ""
	I0725 18:53:56.523107   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.523117   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:56.523124   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:56.523184   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:56.554854   60176 cri.go:89] found id: ""
	I0725 18:53:56.554881   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.554891   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:56.554898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:56.554953   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:56.588851   60176 cri.go:89] found id: ""
	I0725 18:53:56.588876   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.588885   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:56.588892   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:56.588958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:56.623818   60176 cri.go:89] found id: ""
	I0725 18:53:56.623840   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.623849   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:56.623854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:56.623902   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:56.658958   60176 cri.go:89] found id: ""
	I0725 18:53:56.658982   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.658990   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:56.658996   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:56.659044   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:56.694689   60176 cri.go:89] found id: ""
	I0725 18:53:56.694715   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.694724   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:56.694729   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:56.694780   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:56.728038   60176 cri.go:89] found id: ""
	I0725 18:53:56.728067   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.728077   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:56.728088   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:56.728103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:56.805628   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:56.805657   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:56.805672   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:56.886168   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:56.886210   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:56.923004   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:56.923043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:56.975693   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:56.975729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.491244   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:59.503301   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:59.503363   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:59.540674   60176 cri.go:89] found id: ""
	I0725 18:53:59.540699   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.540707   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:59.540712   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:59.540763   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:59.575145   60176 cri.go:89] found id: ""
	I0725 18:53:59.575182   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.575192   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:59.575199   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:59.575260   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:59.606952   60176 cri.go:89] found id: ""
	I0725 18:53:59.606978   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.606989   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:59.606995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:59.607056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:59.645110   60176 cri.go:89] found id: ""
	I0725 18:53:59.645136   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.645147   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:59.645155   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:59.645218   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:59.676479   60176 cri.go:89] found id: ""
	I0725 18:53:59.676499   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.676507   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:59.676512   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:59.676581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:59.707454   60176 cri.go:89] found id: ""
	I0725 18:53:59.707482   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.707493   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:59.707500   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:59.707575   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:59.740387   60176 cri.go:89] found id: ""
	I0725 18:53:59.740414   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.740421   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:59.740427   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:59.740474   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:59.774171   60176 cri.go:89] found id: ""
	I0725 18:53:59.774199   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.774207   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:59.774216   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:59.774231   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:59.825138   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:59.825171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.839715   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:59.839742   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:59.905645   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:59.905681   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:59.905699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:59.980909   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:59.980943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:59.787202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.286987   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:01.099490   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:03.100056   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.602329   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.647803   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:04.648473   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.524178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:02.538055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:02.538113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:02.576234   60176 cri.go:89] found id: ""
	I0725 18:54:02.576259   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.576268   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:02.576274   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:02.576340   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:02.607765   60176 cri.go:89] found id: ""
	I0725 18:54:02.607792   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.607803   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:02.607810   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:02.607865   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:02.640566   60176 cri.go:89] found id: ""
	I0725 18:54:02.640592   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.640601   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:02.640606   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:02.640655   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:02.673476   60176 cri.go:89] found id: ""
	I0725 18:54:02.673504   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.673512   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:02.673517   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:02.673565   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:02.706270   60176 cri.go:89] found id: ""
	I0725 18:54:02.706299   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.706309   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:02.706316   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:02.706376   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:02.737108   60176 cri.go:89] found id: ""
	I0725 18:54:02.737138   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.737146   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:02.737152   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:02.737200   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:02.775681   60176 cri.go:89] found id: ""
	I0725 18:54:02.775710   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.775719   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:02.775724   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:02.775773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:02.808116   60176 cri.go:89] found id: ""
	I0725 18:54:02.808151   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.808159   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:02.808169   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:02.808182   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:02.872505   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:02.872534   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:02.872557   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:02.948158   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:02.948193   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:02.982990   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:02.983020   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:03.031910   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:03.031943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:05.545994   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:05.559105   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.559174   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.594106   60176 cri.go:89] found id: ""
	I0725 18:54:05.594134   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.594144   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:05.594151   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.594232   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.630148   60176 cri.go:89] found id: ""
	I0725 18:54:05.630172   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.630179   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:05.630185   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.630242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.662968   60176 cri.go:89] found id: ""
	I0725 18:54:05.662993   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.663003   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:05.663010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.663059   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.696645   60176 cri.go:89] found id: ""
	I0725 18:54:05.696668   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.696676   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:05.696682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.696738   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:05.730027   60176 cri.go:89] found id: ""
	I0725 18:54:05.730050   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.730058   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:05.730063   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:05.730113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:05.760918   60176 cri.go:89] found id: ""
	I0725 18:54:05.760946   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.760956   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:05.760968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:05.761027   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:05.801025   60176 cri.go:89] found id: ""
	I0725 18:54:05.801057   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.801068   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:05.801075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:05.801142   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:05.834567   60176 cri.go:89] found id: ""
	I0725 18:54:05.834594   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.834605   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:05.834615   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:05.834630   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:05.903812   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:05.903840   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:05.903855   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:05.981642   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:05.981671   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.024246   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.024316   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.081777   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:06.081802   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:04.786654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.786668   59645 pod_ready.go:81] duration metric: took 4m0.006258788s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:05.786698   59645 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:05.786708   59645 pod_ready.go:38] duration metric: took 4m6.551775292s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:54:05.786726   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:05.786754   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.786811   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.838362   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:05.838384   59645 cri.go:89] found id: ""
	I0725 18:54:05.838391   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:05.838441   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.843131   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.843190   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.882099   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:05.882125   59645 cri.go:89] found id: ""
	I0725 18:54:05.882134   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:05.882191   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.886383   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.886450   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.931971   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:05.932001   59645 cri.go:89] found id: ""
	I0725 18:54:05.932011   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:05.932069   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.936830   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.936891   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.976146   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:05.976171   59645 cri.go:89] found id: ""
	I0725 18:54:05.976179   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:05.976244   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.980878   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.980959   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:06.028640   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.028663   59645 cri.go:89] found id: ""
	I0725 18:54:06.028672   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:06.028720   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.033353   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:06.033411   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:06.072245   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.072269   59645 cri.go:89] found id: ""
	I0725 18:54:06.072279   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:06.072352   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.076614   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:06.076672   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:06.116418   59645 cri.go:89] found id: ""
	I0725 18:54:06.116443   59645 logs.go:276] 0 containers: []
	W0725 18:54:06.116453   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:06.116460   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:06.116520   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:06.154703   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:06.154725   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:06.154730   59645 cri.go:89] found id: ""
	I0725 18:54:06.154737   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:06.154795   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.158699   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.162190   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:06.162213   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.199003   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:06.199033   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.248171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:06.248208   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:06.774102   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:06.774139   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.815959   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.815984   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.872973   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:06.873013   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:06.915825   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:06.915858   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:06.958394   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:06.958423   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:06.993405   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:06.993437   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:07.026716   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:07.026745   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:07.040444   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:07.040474   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:07.156511   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:07.156541   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:07.191065   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:07.191091   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
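Unlike process 60176, process 59645 does find control-plane containers, and it collects per-component logs with "crictl logs --tail 400 <container-id>". The same logs can be pulled by hand inside the VM; the container id below is copied from the log above and would differ between runs, and <profile> is again a placeholder:

  # tail the kube-apiserver container log inside the minikube VM
  minikube ssh -p <profile> "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"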
	I0725 18:54:08.099408   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:10.100363   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:07.148587   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:09.648368   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:08.598790   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:08.611234   60176 kubeadm.go:597] duration metric: took 4m4.357436643s to restartPrimaryControlPlane
	W0725 18:54:08.611305   60176 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0725 18:54:08.611343   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:54:13.076782   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.465409333s)
	I0725 18:54:13.076872   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:13.091089   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:54:13.102042   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:54:13.111117   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:54:13.111134   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:54:13.111171   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:54:13.119629   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:54:13.119676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:54:13.128676   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:54:13.136705   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:54:13.136761   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:54:13.145959   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.154628   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:54:13.154676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.163164   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:54:13.171473   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:54:13.171552   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:54:13.179663   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:54:13.244923   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:54:13.245063   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:54:13.387687   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:54:13.387814   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:54:13.387941   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:54:13.566258   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
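Because no control-plane containers came back within the restart window, process 60176 gives up on restartPrimaryControlPlane, wipes the node with "kubeadm reset", removes the stale files under /etc/kubernetes, and re-runs "kubeadm init" against the staged config. Condensed from the commands logged above (the long --ignore-preflight-errors list is exactly the one on the init line and is abbreviated here), the sequence is:

  # reset the old v1.20.0 control plane, then re-initialise it from minikube's staged config
  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=...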
	I0725 18:54:09.724251   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:09.740055   59645 api_server.go:72] duration metric: took 4m18.224261341s to wait for apiserver process to appear ...
	I0725 18:54:09.740086   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:09.740125   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:09.740189   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:09.780027   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:09.780052   59645 cri.go:89] found id: ""
	I0725 18:54:09.780061   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:09.780121   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.784110   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:09.784170   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:09.821158   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:09.821177   59645 cri.go:89] found id: ""
	I0725 18:54:09.821185   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:09.821245   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.825235   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:09.825294   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:09.863880   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:09.863903   59645 cri.go:89] found id: ""
	I0725 18:54:09.863910   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:09.863956   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.868206   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:09.868260   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:09.902168   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:09.902191   59645 cri.go:89] found id: ""
	I0725 18:54:09.902200   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:09.902260   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.906583   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:09.906637   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:09.948980   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:09.948997   59645 cri.go:89] found id: ""
	I0725 18:54:09.949004   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:09.949061   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.953072   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:09.953135   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:09.987862   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:09.987891   59645 cri.go:89] found id: ""
	I0725 18:54:09.987901   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:09.987970   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.991893   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:09.991956   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:10.029171   59645 cri.go:89] found id: ""
	I0725 18:54:10.029201   59645 logs.go:276] 0 containers: []
	W0725 18:54:10.029212   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:10.029229   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:10.029298   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:10.069098   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.069123   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.069129   59645 cri.go:89] found id: ""
	I0725 18:54:10.069138   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:10.069185   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.073777   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.077625   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:10.077650   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:10.089863   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:10.089889   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:10.139865   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:10.139906   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:10.178236   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:10.178263   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:10.216425   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:10.216455   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:10.249818   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:10.249845   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.286603   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:10.286629   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:10.325189   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:10.325215   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:10.378752   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:10.378793   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:10.485922   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:10.485964   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:10.535583   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:10.535627   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:10.586930   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:10.586963   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.626295   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:10.626323   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.552874   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:54:13.558265   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:54:13.559439   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:13.559459   59645 api_server.go:131] duration metric: took 3.819366874s to wait for apiserver health ...
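For process 59645 (apparently the default-k8s-diff-port profile, given the 8444 apiserver port) the apiserver is reachable and the healthz probe above returns 200. The same check from the test host, skipping certificate verification because the apiserver certificate is not trusted locally, would be:

  # expect "ok" with HTTP 200, matching the healthz lines above
  curl -k https://192.168.50.221:8444/healthz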
	I0725 18:54:13.559467   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:13.559491   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:13.559539   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:13.597965   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:13.597988   59645 cri.go:89] found id: ""
	I0725 18:54:13.597996   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:13.598050   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.602225   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:13.602291   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:13.652885   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:13.652914   59645 cri.go:89] found id: ""
	I0725 18:54:13.652924   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:13.652982   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.656970   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:13.657031   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:13.690769   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:13.690792   59645 cri.go:89] found id: ""
	I0725 18:54:13.690802   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:13.690861   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.694630   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:13.694692   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:13.732306   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:13.732346   59645 cri.go:89] found id: ""
	I0725 18:54:13.732356   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:13.732413   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.736242   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:13.736311   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:13.771516   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:13.771543   59645 cri.go:89] found id: ""
	I0725 18:54:13.771552   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:13.771610   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.775592   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:13.775654   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:13.812821   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:13.812847   59645 cri.go:89] found id: ""
	I0725 18:54:13.812857   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:13.812911   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.817039   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:13.817097   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:13.856529   59645 cri.go:89] found id: ""
	I0725 18:54:13.856560   59645 logs.go:276] 0 containers: []
	W0725 18:54:13.856577   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:13.856584   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:13.856647   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:13.889734   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:13.889760   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:13.889766   59645 cri.go:89] found id: ""
	I0725 18:54:13.889774   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:13.889831   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.893730   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.897171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:13.897188   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.568262   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:54:13.568407   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:54:13.568493   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:54:13.568599   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:54:13.568677   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:54:13.568771   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:54:13.568844   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:54:13.569095   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:54:13.570081   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:54:13.570719   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:54:13.571213   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:54:13.571395   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:54:13.571482   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:54:13.900234   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:54:14.171283   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:54:14.317774   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:54:14.522412   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:54:14.537598   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:54:14.539553   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:54:14.539629   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:54:14.683755   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:54:12.600280   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.601203   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:11.648941   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.148132   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.685635   60176 out.go:204]   - Booting up control plane ...
	I0725 18:54:14.685764   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:54:14.697124   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:54:14.698087   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:54:14.698830   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:54:14.701051   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
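At this point kubeadm has written the static Pod manifests and is waiting for the kubelet to start them. A minimal sketch for confirming the manifests and the resulting containers, assuming shell access to the node being bootstrapped (the profile driven by process 60176 is not named in this excerpt):

    # static Pod manifests kubeadm just wrote
    ls -l /etc/kubernetes/manifests
    # containers the kubelet has started from them so far; repeat per component, as the steps above do
    sudo crictl ps -a --name=kube-apiserver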
	I0725 18:54:14.314664   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:14.314702   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:14.359956   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:14.359991   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:14.429456   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:14.429491   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:14.551238   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:14.551279   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:14.598045   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:14.598082   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:14.633668   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:14.633700   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:14.668871   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:14.668897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:14.732575   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:14.732644   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:14.748852   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:14.748897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:14.794021   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:14.794058   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:14.836447   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:14.836481   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:14.870813   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:14.870852   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:17.414647   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:17.414678   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.414683   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.414687   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.414691   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.414694   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.414699   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.414704   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.414710   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.414718   59645 system_pods.go:74] duration metric: took 3.85524656s to wait for pod list to return data ...
	I0725 18:54:17.414726   59645 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:17.417047   59645 default_sa.go:45] found service account: "default"
	I0725 18:54:17.417067   59645 default_sa.go:55] duration metric: took 2.333088ms for default service account to be created ...
	I0725 18:54:17.417074   59645 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:17.422890   59645 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:17.422915   59645 system_pods.go:89] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.422920   59645 system_pods.go:89] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.422925   59645 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.422929   59645 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.422933   59645 system_pods.go:89] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.422936   59645 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.422942   59645 system_pods.go:89] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.422947   59645 system_pods.go:89] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.422953   59645 system_pods.go:126] duration metric: took 5.874194ms to wait for k8s-apps to be running ...
	I0725 18:54:17.422958   59645 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:17.422998   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:17.438463   59645 system_svc.go:56] duration metric: took 15.497014ms WaitForService to wait for kubelet
	I0725 18:54:17.438490   59645 kubeadm.go:582] duration metric: took 4m25.922705533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:17.438511   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:17.441632   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:17.441653   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:17.441671   59645 node_conditions.go:105] duration metric: took 3.155244ms to run NodePressure ...
	I0725 18:54:17.441682   59645 start.go:241] waiting for startup goroutines ...
	I0725 18:54:17.441688   59645 start.go:246] waiting for cluster config update ...
	I0725 18:54:17.441698   59645 start.go:255] writing updated cluster config ...
	I0725 18:54:17.441957   59645 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:17.491791   59645 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:17.493992   59645 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-600433" cluster and "default" namespace by default
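This closes the restart of default-k8s-diff-port-600433: the wait loop finished and minikube wrote the kubeconfig context. A minimal sketch of a manual sanity check against that context (per the pod list above, metrics-server-569cc877fc-5js8s is still Pending, so it would show as not Ready):

    kubectl --context default-k8s-diff-port-600433 get nodes
    kubectl --context default-k8s-diff-port-600433 -n kube-system get pods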
	I0725 18:54:16.601481   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:19.100120   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:16.646970   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:18.647757   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:20.650382   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:21.599857   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:24.099007   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:23.147215   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:25.148069   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:26.599428   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:28.600159   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:30.601469   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:27.150076   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:29.647741   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:33.100850   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:35.600080   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:31.648293   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:34.147584   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:36.147883   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.099662   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.601691   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.148559   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.648470   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:43.099948   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:45.599146   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:41.647969   60732 pod_ready.go:81] duration metric: took 4m0.006188545s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:41.647993   60732 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:41.647999   60732 pod_ready.go:38] duration metric: took 4m4.549463734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
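The extra wait gave up after 4m because metrics-server-569cc877fc-4gcts never reported Ready. A minimal sketch of a follow-up inspection, using the pod name exactly as logged above and the profile named in this process's "Done!" line further down (embed-certs-646344); the logs command will fail if the container never started:

    kubectl --context embed-certs-646344 -n kube-system describe pod metrics-server-569cc877fc-4gcts
    kubectl --context embed-certs-646344 -n kube-system logs metrics-server-569cc877fc-4gcts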
	I0725 18:54:41.648014   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:41.648042   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:41.648093   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:41.701960   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:41.701990   60732 cri.go:89] found id: ""
	I0725 18:54:41.702000   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:41.702060   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.706683   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:41.706775   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:41.741997   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:41.742019   60732 cri.go:89] found id: ""
	I0725 18:54:41.742027   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:41.742070   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.745965   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:41.746019   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:41.787104   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:41.787127   60732 cri.go:89] found id: ""
	I0725 18:54:41.787137   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:41.787189   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.791375   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:41.791441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:41.836394   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:41.836417   60732 cri.go:89] found id: ""
	I0725 18:54:41.836425   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:41.836472   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.840775   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:41.840830   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:41.877307   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:41.877328   60732 cri.go:89] found id: ""
	I0725 18:54:41.877338   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:41.877384   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.881221   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:41.881289   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:41.918540   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:41.918569   60732 cri.go:89] found id: ""
	I0725 18:54:41.918579   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:41.918639   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.922866   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:41.922975   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:41.957335   60732 cri.go:89] found id: ""
	I0725 18:54:41.957361   60732 logs.go:276] 0 containers: []
	W0725 18:54:41.957371   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:41.957377   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:41.957441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:41.998241   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:41.998269   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:41.998274   60732 cri.go:89] found id: ""
	I0725 18:54:41.998283   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:41.998333   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.002872   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.006541   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:42.006571   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:42.039456   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:42.039484   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:42.535367   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:42.535412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:42.592118   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:42.592165   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:42.606753   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:42.606784   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:42.656287   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:42.656337   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:42.696439   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:42.696470   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:42.752874   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:42.752913   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:42.786513   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:42.786540   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:42.914470   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:42.914506   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:42.951371   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:42.951399   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:42.989249   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:42.989278   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:43.030911   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:43.030945   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:45.581560   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:45.599532   60732 api_server.go:72] duration metric: took 4m15.71630146s to wait for apiserver process to appear ...
	I0725 18:54:45.599559   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:45.599602   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:45.599669   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:45.643222   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:45.643245   60732 cri.go:89] found id: ""
	I0725 18:54:45.643251   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:45.643293   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.647594   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:45.647646   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:45.685817   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:45.685843   60732 cri.go:89] found id: ""
	I0725 18:54:45.685851   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:45.685908   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.689698   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:45.689746   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:45.723068   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:45.723086   60732 cri.go:89] found id: ""
	I0725 18:54:45.723093   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:45.723139   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.727312   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:45.727373   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:45.764668   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.764691   60732 cri.go:89] found id: ""
	I0725 18:54:45.764698   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:45.764746   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.768763   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:45.768821   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:45.804140   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.804162   60732 cri.go:89] found id: ""
	I0725 18:54:45.804171   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:45.804229   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.807907   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:45.807962   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:45.845435   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:45.845458   60732 cri.go:89] found id: ""
	I0725 18:54:45.845465   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:45.845516   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.849429   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:45.849488   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:45.882663   60732 cri.go:89] found id: ""
	I0725 18:54:45.882696   60732 logs.go:276] 0 containers: []
	W0725 18:54:45.882706   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:45.882713   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:45.882779   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:45.916947   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:45.916975   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:45.916988   60732 cri.go:89] found id: ""
	I0725 18:54:45.916995   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:45.917039   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.921470   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.925153   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:45.925175   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.959693   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:45.959722   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.998162   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:45.998188   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:47.599790   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:49.605818   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:46.424235   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:46.424271   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:46.465439   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:46.465468   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:46.516900   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:46.516931   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:46.629700   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:46.629777   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:46.673233   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:46.673264   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:46.706641   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:46.706680   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:46.741970   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:46.742002   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:46.755337   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:46.755364   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:46.805564   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:46.805594   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:46.856226   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:46.856257   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.398852   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:54:49.403222   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:54:49.404180   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:49.404199   60732 api_server.go:131] duration metric: took 3.804634202s to wait for apiserver health ...
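The healthz probe above hit https://192.168.61.133:8443/healthz directly and got 200/ok. A minimal sketch of the same check from a shell, assuming the endpoint is reachable from where it is run; -k skips certificate verification, and anonymous access to /healthz may be restricted depending on the cluster's RBAC settings:

    curl -k https://192.168.61.133:8443/healthz
    # expected output on a healthy apiserver, as logged above:
    # ok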
	I0725 18:54:49.404206   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:49.404227   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:49.404269   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:49.439543   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:49.439561   60732 cri.go:89] found id: ""
	I0725 18:54:49.439568   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:49.439625   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.444958   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:49.445028   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:49.482934   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:49.482959   60732 cri.go:89] found id: ""
	I0725 18:54:49.482969   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:49.483026   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.486982   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:49.487057   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:49.526379   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.526405   60732 cri.go:89] found id: ""
	I0725 18:54:49.526415   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:49.526481   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.531314   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:49.531401   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:49.565687   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.565716   60732 cri.go:89] found id: ""
	I0725 18:54:49.565724   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:49.565772   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.569706   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:49.569778   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:49.606900   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.606923   60732 cri.go:89] found id: ""
	I0725 18:54:49.606932   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:49.606986   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.611079   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:49.611155   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:49.645077   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.645099   60732 cri.go:89] found id: ""
	I0725 18:54:49.645107   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:49.645165   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.648932   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:49.648984   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:49.685181   60732 cri.go:89] found id: ""
	I0725 18:54:49.685209   60732 logs.go:276] 0 containers: []
	W0725 18:54:49.685220   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:49.685228   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:49.685290   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:49.718825   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.718852   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:49.718858   60732 cri.go:89] found id: ""
	I0725 18:54:49.718866   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:49.718927   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.723182   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.726590   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:49.726611   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.760011   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:49.760038   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.816552   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:49.816593   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.852003   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:49.852034   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.887907   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:49.887937   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.920728   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:49.920763   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:49.972145   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:49.972177   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:49.986365   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:49.986391   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:50.088100   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:50.088141   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:50.137382   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:50.137412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:50.181636   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:50.181668   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:50.217427   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:50.217452   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:50.575378   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:50.575421   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:53.125288   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:53.125322   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.125327   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.125331   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.125335   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.125338   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.125341   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.125347   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.125352   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.125358   60732 system_pods.go:74] duration metric: took 3.721147072s to wait for pod list to return data ...
	I0725 18:54:53.125365   60732 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:53.127677   60732 default_sa.go:45] found service account: "default"
	I0725 18:54:53.127695   60732 default_sa.go:55] duration metric: took 2.325927ms for default service account to be created ...
	I0725 18:54:53.127702   60732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:53.134656   60732 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:53.134682   60732 system_pods.go:89] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.134690   60732 system_pods.go:89] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.134697   60732 system_pods.go:89] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.134707   60732 system_pods.go:89] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.134713   60732 system_pods.go:89] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.134719   60732 system_pods.go:89] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.134729   60732 system_pods.go:89] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.134738   60732 system_pods.go:89] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.134745   60732 system_pods.go:126] duration metric: took 7.037359ms to wait for k8s-apps to be running ...
	I0725 18:54:53.134756   60732 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:53.134804   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:53.152898   60732 system_svc.go:56] duration metric: took 18.132464ms WaitForService to wait for kubelet
	I0725 18:54:53.152939   60732 kubeadm.go:582] duration metric: took 4m23.26971097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:53.152966   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:53.155626   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:53.155645   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:53.155654   60732 node_conditions.go:105] duration metric: took 2.684085ms to run NodePressure ...
	I0725 18:54:53.155664   60732 start.go:241] waiting for startup goroutines ...
	I0725 18:54:53.155670   60732 start.go:246] waiting for cluster config update ...
	I0725 18:54:53.155680   60732 start.go:255] writing updated cluster config ...
	I0725 18:54:53.155922   60732 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:53.202323   60732 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:53.204492   60732 out.go:177] * Done! kubectl is now configured to use "embed-certs-646344" cluster and "default" namespace by default
	I0725 18:54:52.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.599046   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.702358   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:54:54.702929   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:54.703166   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
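Here the kubeadm kubelet-check has been failing since the 40s timeout: the kubelet's local healthz endpoint on port 10248 is refusing connections. A minimal triage sketch on the affected node (the profile driven by process 60176 is not named in this excerpt), using only standard systemd commands and the same probe kubeadm reports:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager
    curl -sSL http://localhost:10248/healthz   # the probe quoted in the kubelet-check message above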
	I0725 18:54:56.600641   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:58.600997   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:59.703734   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:59.704045   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:01.099681   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:03.099863   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:05.099936   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:07.600199   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:09.600587   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:10.600594   59378 pod_ready.go:81] duration metric: took 4m0.007321371s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:55:10.600617   59378 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:55:10.600625   59378 pod_ready.go:38] duration metric: took 4m5.545225617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:55:10.600637   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:55:10.600660   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:10.600701   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:10.652016   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:10.652040   59378 cri.go:89] found id: ""
	I0725 18:55:10.652047   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:10.652099   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.656405   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:10.656471   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:10.695672   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:10.695697   59378 cri.go:89] found id: ""
	I0725 18:55:10.695706   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:10.695768   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.700362   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:10.700424   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:10.736685   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.736702   59378 cri.go:89] found id: ""
	I0725 18:55:10.736709   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:10.736755   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.740626   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:10.740686   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:10.786452   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:10.786470   59378 cri.go:89] found id: ""
	I0725 18:55:10.786478   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:10.786533   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.790873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:10.790938   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:10.826203   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:10.826238   59378 cri.go:89] found id: ""
	I0725 18:55:10.826247   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:10.826311   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.830241   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:10.830418   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:10.865432   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:10.865460   59378 cri.go:89] found id: ""
	I0725 18:55:10.865470   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:10.865527   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.869415   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:10.869469   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:10.904230   59378 cri.go:89] found id: ""
	I0725 18:55:10.904254   59378 logs.go:276] 0 containers: []
	W0725 18:55:10.904262   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:10.904267   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:10.904339   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:10.938539   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:10.938558   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:10.938563   59378 cri.go:89] found id: ""
	I0725 18:55:10.938571   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:10.938623   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:09.704361   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:09.704593   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:10.942419   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.946266   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:10.946293   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.984335   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:10.984365   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:11.021733   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:11.021762   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:11.059218   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:11.059248   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:11.110886   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:11.110919   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:11.147381   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:11.147412   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:11.644012   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:11.644052   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:11.699290   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:11.699324   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:11.750317   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:11.750350   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:11.801340   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:11.801370   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:11.835746   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:11.835773   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:11.875309   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:11.875340   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:11.888262   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:11.888286   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:14.516169   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:55:14.533223   59378 api_server.go:72] duration metric: took 4m17.191676299s to wait for apiserver process to appear ...
	I0725 18:55:14.533248   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:55:14.533283   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:14.533328   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:14.568170   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:14.568188   59378 cri.go:89] found id: ""
	I0725 18:55:14.568195   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:14.568237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.572638   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:14.572704   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:14.605953   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:14.605976   59378 cri.go:89] found id: ""
	I0725 18:55:14.605983   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:14.606029   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.609849   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:14.609912   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:14.650049   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.650068   59378 cri.go:89] found id: ""
	I0725 18:55:14.650075   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:14.650117   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.653905   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:14.653966   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:14.697059   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:14.697078   59378 cri.go:89] found id: ""
	I0725 18:55:14.697086   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:14.697145   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.701179   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:14.701245   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:14.741482   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:14.741499   59378 cri.go:89] found id: ""
	I0725 18:55:14.741507   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:14.741554   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.745355   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:14.745410   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:14.784058   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.784077   59378 cri.go:89] found id: ""
	I0725 18:55:14.784086   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:14.784146   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.788254   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:14.788354   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:14.823286   59378 cri.go:89] found id: ""
	I0725 18:55:14.823309   59378 logs.go:276] 0 containers: []
	W0725 18:55:14.823317   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:14.823322   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:14.823369   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:14.860591   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.860625   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:14.860631   59378 cri.go:89] found id: ""
	I0725 18:55:14.860639   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:14.860693   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.864444   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.868015   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:14.868034   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.902336   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:14.902361   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.951281   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:14.951312   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.987810   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:14.987836   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:15.031264   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:15.031303   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:15.082950   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:15.082981   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:15.097240   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:15.097264   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:15.195392   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:15.195422   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:15.238978   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:15.239015   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:15.278551   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:15.278586   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:15.318486   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:15.318517   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:15.354217   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:15.354245   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:15.391511   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:15.391536   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:18.296420   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:55:18.301704   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:55:18.303040   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:55:18.303059   59378 api_server.go:131] duration metric: took 3.769804671s to wait for apiserver health ...
	I0725 18:55:18.303067   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:55:18.303097   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:18.303148   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:18.340192   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:18.340210   59378 cri.go:89] found id: ""
	I0725 18:55:18.340217   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:18.340262   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.343882   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:18.343936   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:18.381885   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:18.381912   59378 cri.go:89] found id: ""
	I0725 18:55:18.381922   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:18.381979   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.385682   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:18.385749   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:18.420162   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:18.420183   59378 cri.go:89] found id: ""
	I0725 18:55:18.420190   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:18.420237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.424103   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:18.424153   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:18.462946   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:18.462987   59378 cri.go:89] found id: ""
	I0725 18:55:18.462998   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:18.463055   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.467228   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:18.467278   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:18.510007   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:18.510036   59378 cri.go:89] found id: ""
	I0725 18:55:18.510046   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:18.510103   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.513873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:18.513937   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:18.551230   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:18.551255   59378 cri.go:89] found id: ""
	I0725 18:55:18.551264   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:18.551322   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.555764   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:18.555833   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:18.593584   59378 cri.go:89] found id: ""
	I0725 18:55:18.593615   59378 logs.go:276] 0 containers: []
	W0725 18:55:18.593626   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:18.593633   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:18.593690   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:18.631912   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.631938   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.631944   59378 cri.go:89] found id: ""
	I0725 18:55:18.631952   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:18.632036   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.635895   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.639457   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:18.639481   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.677563   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:18.677595   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.716298   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:18.716353   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:19.104236   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:19.104281   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:19.157931   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:19.157965   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:19.214479   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:19.214510   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:19.265860   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:19.265887   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:19.306476   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:19.306501   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:19.340758   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:19.340783   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:19.380798   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:19.380824   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:19.439585   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:19.439619   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:19.454117   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:19.454145   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:19.558944   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:19.558972   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:22.114733   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:55:22.114766   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.114773   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.114778   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.114783   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.114788   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.114792   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.114800   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.114806   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.114815   59378 system_pods.go:74] duration metric: took 3.811742621s to wait for pod list to return data ...
	I0725 18:55:22.114827   59378 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:55:22.118211   59378 default_sa.go:45] found service account: "default"
	I0725 18:55:22.118237   59378 default_sa.go:55] duration metric: took 3.400507ms for default service account to be created ...
	I0725 18:55:22.118245   59378 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:55:22.123350   59378 system_pods.go:86] 8 kube-system pods found
	I0725 18:55:22.123375   59378 system_pods.go:89] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.123380   59378 system_pods.go:89] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.123384   59378 system_pods.go:89] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.123390   59378 system_pods.go:89] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.123394   59378 system_pods.go:89] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.123398   59378 system_pods.go:89] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.123405   59378 system_pods.go:89] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.123410   59378 system_pods.go:89] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.123417   59378 system_pods.go:126] duration metric: took 5.166628ms to wait for k8s-apps to be running ...
	I0725 18:55:22.123424   59378 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:55:22.123467   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:55:22.139784   59378 system_svc.go:56] duration metric: took 16.349883ms WaitForService to wait for kubelet
	I0725 18:55:22.139808   59378 kubeadm.go:582] duration metric: took 4m24.798265923s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:55:22.139825   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:55:22.143958   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:55:22.143981   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:55:22.143992   59378 node_conditions.go:105] duration metric: took 4.161089ms to run NodePressure ...
	I0725 18:55:22.144006   59378 start.go:241] waiting for startup goroutines ...
	I0725 18:55:22.144015   59378 start.go:246] waiting for cluster config update ...
	I0725 18:55:22.144026   59378 start.go:255] writing updated cluster config ...
	I0725 18:55:22.144382   59378 ssh_runner.go:195] Run: rm -f paused
	I0725 18:55:22.192893   59378 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0725 18:55:22.195796   59378 out.go:177] * Done! kubectl is now configured to use "no-preload-371663" cluster and "default" namespace by default
	I0725 18:55:29.705545   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:29.705871   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.707936   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:56:09.708279   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.708303   60176 kubeadm.go:310] 
	I0725 18:56:09.708361   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:56:09.708425   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:56:09.708434   60176 kubeadm.go:310] 
	I0725 18:56:09.708495   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:56:09.708548   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:56:09.708721   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:56:09.708755   60176 kubeadm.go:310] 
	I0725 18:56:09.708910   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:56:09.708960   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:56:09.708997   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:56:09.709006   60176 kubeadm.go:310] 
	I0725 18:56:09.709130   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:56:09.709230   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:56:09.709239   60176 kubeadm.go:310] 
	I0725 18:56:09.709366   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:56:09.709499   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:56:09.709608   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:56:09.709715   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:56:09.709730   60176 kubeadm.go:310] 
	I0725 18:56:09.710446   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:56:09.710594   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:56:09.710699   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0725 18:56:09.710838   60176 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 18:56:09.710897   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:56:15.078699   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.367772874s)
	I0725 18:56:15.078772   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:56:15.093265   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:56:15.102513   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:56:15.102529   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:56:15.102570   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:56:15.111001   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:56:15.111059   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:56:15.119773   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:56:15.128109   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:56:15.128166   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:56:15.136753   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.145122   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:56:15.145179   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.153952   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:56:15.162067   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:56:15.162109   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:56:15.170779   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:56:15.382925   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:58:11.387751   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:58:11.387868   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0725 18:58:11.389848   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:58:11.389935   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:58:11.390076   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:58:11.390177   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:58:11.390289   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:58:11.390389   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:58:11.392281   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:58:11.392400   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:58:11.392487   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:58:11.392609   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:58:11.392698   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:58:11.392808   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:58:11.392893   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:58:11.392960   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:58:11.393054   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:58:11.393160   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:58:11.393260   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:58:11.393311   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:58:11.393362   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:58:11.393415   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:58:11.393470   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:58:11.393522   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:58:11.393573   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:58:11.393665   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:58:11.393760   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:58:11.393815   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:58:11.393888   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:58:11.395197   60176 out.go:204]   - Booting up control plane ...
	I0725 18:58:11.395292   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:58:11.395385   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:58:11.395454   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:58:11.395528   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:58:11.395674   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:58:11.395717   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:58:11.395793   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396019   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396116   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396334   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396408   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396572   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396638   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396799   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396865   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.397061   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.397069   60176 kubeadm.go:310] 
	I0725 18:58:11.397102   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:58:11.397136   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:58:11.397141   60176 kubeadm.go:310] 
	I0725 18:58:11.397169   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:58:11.397212   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:58:11.397314   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:58:11.397338   60176 kubeadm.go:310] 
	I0725 18:58:11.397462   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:58:11.397504   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:58:11.397554   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:58:11.397566   60176 kubeadm.go:310] 
	I0725 18:58:11.397657   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:58:11.397730   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:58:11.397737   60176 kubeadm.go:310] 
	I0725 18:58:11.397832   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:58:11.397928   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:58:11.398009   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:58:11.398088   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:58:11.398144   60176 kubeadm.go:310] 
	I0725 18:58:11.398184   60176 kubeadm.go:394] duration metric: took 8m7.195831536s to StartCluster
	I0725 18:58:11.398237   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:58:11.398431   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:58:11.438474   60176 cri.go:89] found id: ""
	I0725 18:58:11.438497   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.438504   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:58:11.438509   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:58:11.438560   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:58:11.470965   60176 cri.go:89] found id: ""
	I0725 18:58:11.471000   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.471013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:58:11.471021   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:58:11.471086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:58:11.503353   60176 cri.go:89] found id: ""
	I0725 18:58:11.503387   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.503402   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:58:11.503409   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:58:11.503468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:58:11.535307   60176 cri.go:89] found id: ""
	I0725 18:58:11.535340   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.535350   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:58:11.535359   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:58:11.535425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:58:11.568071   60176 cri.go:89] found id: ""
	I0725 18:58:11.568094   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.568104   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:58:11.568118   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:58:11.568183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:58:11.600126   60176 cri.go:89] found id: ""
	I0725 18:58:11.600154   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.600165   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:58:11.600172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:58:11.600234   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:58:11.632609   60176 cri.go:89] found id: ""
	I0725 18:58:11.632635   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.632642   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:58:11.632648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:58:11.632706   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:58:11.666352   60176 cri.go:89] found id: ""
	I0725 18:58:11.666376   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.666384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:58:11.666392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:58:11.666409   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:58:11.766887   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:58:11.766912   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:58:11.766930   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:58:11.885565   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:58:11.885601   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:58:11.927611   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:58:11.927637   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:58:11.978011   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:58:11.978046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0725 18:58:11.991296   60176 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 18:58:11.991350   60176 out.go:239] * 
	W0725 18:58:11.991412   60176 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.991433   60176 out.go:239] * 
	W0725 18:58:11.992535   60176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:58:11.996223   60176 out.go:177] 
	W0725 18:58:11.997418   60176 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.997464   60176 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 18:58:11.997495   60176 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 18:58:11.998869   60176 out.go:177] 
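
For reference, the remediation suggested in the output above would look roughly like the following. This is an illustrative sketch only: the profile name is a placeholder, and the driver/runtime flags are assumed from this job's KVM/cri-o configuration rather than taken from the failing command itself.

  # Inspect kubelet logs on the node, as the suggestion recommends:
  minikube ssh -p <profile> -- sudo journalctl -xeu kubelet

  # Retry the start with the kubelet cgroup driver pinned to systemd:
  minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
    --extra-config=kubelet.cgroup-driver=systemd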
	
	
	==> CRI-O <==
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.206787132Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934264206763747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=121011dd-471d-43ae-8753-e1a1856e36bf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.207328274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb1508a1-e1f2-45f8-9d19-efe579651da6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.207418532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb1508a1-e1f2-45f8-9d19-efe579651da6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.207644038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933485879283164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6b4b69fb05b208b705ffd093d0a15286a06e3ca32c24e4e66e19af235036eb,PodSandboxId:47feb5223f7c64f775b7350b826ee6cfd7b40d3555cf6d1a45f0a1cc2be70b99,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933465581590291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a19fc6a-6194-4c15-8414-a7c7da162bce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956,PodSandboxId:dd569d9d1412e29902cf9547fdcff22180a9dfd1b9987100901b7ef0e51f5ca9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933462768123994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-lq97z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035503b5-7acf-4f42-a057-b3346c9b9704,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933455065603157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
cd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9,PodSandboxId:eee515d54dbb47c3808abcbfb394f5976eafeb9d9fcd9f55b178c9380310b3fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721933455046807784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bf9rt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cbe378-8c6b-4034-9882-fc55c4eeca
38,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c,PodSandboxId:b553599d71041952311b42a6265e64ad21c76f4d30501b1bff26a61ffa1d1571,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721933451373806223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14ebf141570a844b702c633bb7812c81,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89,PodSandboxId:bbc06dc8cc834d05ee800333c515d21d8c6a7699a0e92987edc6fbc76119f384,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721933451333722580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54de6bb68dc2494531feca4bfce0b
0d0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3,PodSandboxId:617f7785e5fb01ee554971e43aa513134fe6c09c68d7e9a0fa5294ad2da72c58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721933451324991803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5942a85da7f3ab1db659e97700580b8b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc,PodSandboxId:a6178b5d395817a38e2b58c56463600fbb669ad157198d5a5446c7562545a27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721933451272611738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c4bf6b35e683c9ab68c44d7fba2957,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb1508a1-e1f2-45f8-9d19-efe579651da6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.248199035Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d08645f-3680-4146-ac65-dfd0cd80822b name=/runtime.v1.RuntimeService/Version
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.248297829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d08645f-3680-4146-ac65-dfd0cd80822b name=/runtime.v1.RuntimeService/Version
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.249714040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75c2b5dd-5446-4c36-b0e2-ceb5dd91caa4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.250308257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934264250283210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75c2b5dd-5446-4c36-b0e2-ceb5dd91caa4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.250888183Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5413e30c-5f1a-4bbf-a8ab-54544455bad7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.250975754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5413e30c-5f1a-4bbf-a8ab-54544455bad7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.251180470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933485879283164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6b4b69fb05b208b705ffd093d0a15286a06e3ca32c24e4e66e19af235036eb,PodSandboxId:47feb5223f7c64f775b7350b826ee6cfd7b40d3555cf6d1a45f0a1cc2be70b99,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933465581590291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a19fc6a-6194-4c15-8414-a7c7da162bce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956,PodSandboxId:dd569d9d1412e29902cf9547fdcff22180a9dfd1b9987100901b7ef0e51f5ca9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933462768123994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-lq97z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035503b5-7acf-4f42-a057-b3346c9b9704,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933455065603157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
cd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9,PodSandboxId:eee515d54dbb47c3808abcbfb394f5976eafeb9d9fcd9f55b178c9380310b3fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721933455046807784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bf9rt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cbe378-8c6b-4034-9882-fc55c4eeca
38,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c,PodSandboxId:b553599d71041952311b42a6265e64ad21c76f4d30501b1bff26a61ffa1d1571,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721933451373806223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14ebf141570a844b702c633bb7812c81,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89,PodSandboxId:bbc06dc8cc834d05ee800333c515d21d8c6a7699a0e92987edc6fbc76119f384,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721933451333722580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54de6bb68dc2494531feca4bfce0b
0d0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3,PodSandboxId:617f7785e5fb01ee554971e43aa513134fe6c09c68d7e9a0fa5294ad2da72c58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721933451324991803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5942a85da7f3ab1db659e97700580b8b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc,PodSandboxId:a6178b5d395817a38e2b58c56463600fbb669ad157198d5a5446c7562545a27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721933451272611738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c4bf6b35e683c9ab68c44d7fba2957,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5413e30c-5f1a-4bbf-a8ab-54544455bad7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.289073528Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1215266f-b5df-414f-98a4-2cf0294215cc name=/runtime.v1.RuntimeService/Version
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.289146293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1215266f-b5df-414f-98a4-2cf0294215cc name=/runtime.v1.RuntimeService/Version
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.290324520Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=422f7f91-679a-4ebd-a399-6b357e43cf5c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.290754648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934264290721477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=422f7f91-679a-4ebd-a399-6b357e43cf5c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.291372860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d3880cf-cbed-4f7e-b074-2e2b3b6d82da name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.291427040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d3880cf-cbed-4f7e-b074-2e2b3b6d82da name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.291687700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933485879283164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6b4b69fb05b208b705ffd093d0a15286a06e3ca32c24e4e66e19af235036eb,PodSandboxId:47feb5223f7c64f775b7350b826ee6cfd7b40d3555cf6d1a45f0a1cc2be70b99,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933465581590291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a19fc6a-6194-4c15-8414-a7c7da162bce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956,PodSandboxId:dd569d9d1412e29902cf9547fdcff22180a9dfd1b9987100901b7ef0e51f5ca9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933462768123994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-lq97z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035503b5-7acf-4f42-a057-b3346c9b9704,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933455065603157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
cd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9,PodSandboxId:eee515d54dbb47c3808abcbfb394f5976eafeb9d9fcd9f55b178c9380310b3fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721933455046807784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bf9rt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cbe378-8c6b-4034-9882-fc55c4eeca
38,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c,PodSandboxId:b553599d71041952311b42a6265e64ad21c76f4d30501b1bff26a61ffa1d1571,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721933451373806223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14ebf141570a844b702c633bb7812c81,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89,PodSandboxId:bbc06dc8cc834d05ee800333c515d21d8c6a7699a0e92987edc6fbc76119f384,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721933451333722580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54de6bb68dc2494531feca4bfce0b
0d0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3,PodSandboxId:617f7785e5fb01ee554971e43aa513134fe6c09c68d7e9a0fa5294ad2da72c58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721933451324991803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5942a85da7f3ab1db659e97700580b8b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc,PodSandboxId:a6178b5d395817a38e2b58c56463600fbb669ad157198d5a5446c7562545a27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721933451272611738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c4bf6b35e683c9ab68c44d7fba2957,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d3880cf-cbed-4f7e-b074-2e2b3b6d82da name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.325204183Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de23e79f-a95e-4792-a0ca-ac5d085b2e50 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.325279641Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de23e79f-a95e-4792-a0ca-ac5d085b2e50 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.326898782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c224b229-d7dd-4d28-9856-e65ba8ad9221 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.327358839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934264327334575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c224b229-d7dd-4d28-9856-e65ba8ad9221 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.328002977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6dfbe651-9a2e-4d3a-81e9-1e3327b32269 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.328074702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6dfbe651-9a2e-4d3a-81e9-1e3327b32269 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:04:24 no-preload-371663 crio[728]: time="2024-07-25 19:04:24.328361293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933485879283164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6b4b69fb05b208b705ffd093d0a15286a06e3ca32c24e4e66e19af235036eb,PodSandboxId:47feb5223f7c64f775b7350b826ee6cfd7b40d3555cf6d1a45f0a1cc2be70b99,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933465581590291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a19fc6a-6194-4c15-8414-a7c7da162bce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956,PodSandboxId:dd569d9d1412e29902cf9547fdcff22180a9dfd1b9987100901b7ef0e51f5ca9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933462768123994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-lq97z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035503b5-7acf-4f42-a057-b3346c9b9704,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933455065603157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
cd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9,PodSandboxId:eee515d54dbb47c3808abcbfb394f5976eafeb9d9fcd9f55b178c9380310b3fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721933455046807784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bf9rt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cbe378-8c6b-4034-9882-fc55c4eeca
38,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c,PodSandboxId:b553599d71041952311b42a6265e64ad21c76f4d30501b1bff26a61ffa1d1571,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721933451373806223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14ebf141570a844b702c633bb7812c81,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89,PodSandboxId:bbc06dc8cc834d05ee800333c515d21d8c6a7699a0e92987edc6fbc76119f384,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721933451333722580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54de6bb68dc2494531feca4bfce0b
0d0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3,PodSandboxId:617f7785e5fb01ee554971e43aa513134fe6c09c68d7e9a0fa5294ad2da72c58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721933451324991803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5942a85da7f3ab1db659e97700580b8b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc,PodSandboxId:a6178b5d395817a38e2b58c56463600fbb669ad157198d5a5446c7562545a27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721933451272611738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c4bf6b35e683c9ab68c44d7fba2957,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6dfbe651-9a2e-4d3a-81e9-1e3327b32269 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dcdeb74e65467       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   c3cb40b3caaf3       storage-provisioner
	ac6b4b69fb05b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   47feb5223f7c6       busybox
	143f91ca28541       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   dd569d9d1412e       coredns-5cfdc65f69-lq97z
	e99e6f0bcc37c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   c3cb40b3caaf3       storage-provisioner
	6b9d65c951729       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      13 minutes ago      Running             kube-proxy                1                   eee515d54dbb4       kube-proxy-bf9rt
	86a55c3ce8aca       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      13 minutes ago      Running             kube-apiserver            1                   b553599d71041       kube-apiserver-no-preload-371663
	f55693d23f976       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      13 minutes ago      Running             kube-controller-manager   1                   bbc06dc8cc834       kube-controller-manager-no-preload-371663
	e8502ebc3bc8f       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      13 minutes ago      Running             kube-scheduler            1                   617f7785e5fb0       kube-scheduler-no-preload-371663
	5b4489bee34a4       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      13 minutes ago      Running             etcd                      1                   a6178b5d39581       etcd-no-preload-371663
	
	
	==> coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58506 - 48972 "HINFO IN 3742996015109382260.5669205391674193530. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009742591s
	
	
	==> describe nodes <==
	Name:               no-preload-371663
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-371663
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=no-preload-371663
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_41_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:40:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-371663
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 19:04:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 19:01:37 +0000   Thu, 25 Jul 2024 18:40:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 19:01:37 +0000   Thu, 25 Jul 2024 18:40:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 19:01:37 +0000   Thu, 25 Jul 2024 18:40:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 19:01:37 +0000   Thu, 25 Jul 2024 18:51:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.62
	  Hostname:    no-preload-371663
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f820fa13c6425fac15cb7471f7543e
	  System UUID:                78f820fa-13c6-425f-ac15-cb7471f7543e
	  Boot ID:                    cfafaa54-5894-431e-8aa7-1cae14472e72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5cfdc65f69-lq97z                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-371663                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-371663             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-371663    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-bf9rt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-371663             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-78fcd8795b-zthnk              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-371663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-371663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-371663 status is now: NodeHasSufficientPID
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeReady                23m                kubelet          Node no-preload-371663 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-371663 event: Registered Node no-preload-371663 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-371663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-371663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-371663 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-371663 event: Registered Node no-preload-371663 in Controller
	
	
	==> dmesg <==
	[Jul25 18:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000002] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051273] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.933748] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.045778] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.554814] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.682576] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.056397] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077045] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.158331] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.158946] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.256781] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[ +14.688749] systemd-fstab-generator[1177]: Ignoring "noauto" option for root device
	[  +0.058043] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.835100] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +4.578737] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.424965] systemd-fstab-generator[1924]: Ignoring "noauto" option for root device
	[Jul25 18:51] kauditd_printk_skb: 61 callbacks suppressed
	[ +24.194477] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] <==
	{"level":"info","ts":"2024-07-25T18:50:51.749386Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-25T18:50:51.751141Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-25T18:50:51.751409Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.72.62:2380"}
	{"level":"info","ts":"2024-07-25T18:50:51.751557Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.72.62:2380"}
	{"level":"info","ts":"2024-07-25T18:50:51.751875Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7318547c71bbcda3","initial-advertise-peer-urls":["https://192.168.72.62:2380"],"listen-peer-urls":["https://192.168.72.62:2380"],"advertise-client-urls":["https://192.168.72.62:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.62:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:50:51.752786Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:50:53.109326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-25T18:50:53.109365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-25T18:50:53.109391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 received MsgPreVoteResp from 7318547c71bbcda3 at term 2"}
	{"level":"info","ts":"2024-07-25T18:50:53.109402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 became candidate at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:53.109408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 received MsgVoteResp from 7318547c71bbcda3 at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:53.109417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 became leader at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:53.109423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7318547c71bbcda3 elected leader 7318547c71bbcda3 at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:53.113752Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7318547c71bbcda3","local-member-attributes":"{Name:no-preload-371663 ClientURLs:[https://192.168.72.62:2379]}","request-path":"/0/members/7318547c71bbcda3/attributes","cluster-id":"3beaf59f728f470","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:50:53.113917Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:50:53.114164Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:50:53.114192Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:50:53.11433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:50:53.115174Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-25T18:50:53.115188Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-25T18:50:53.116032Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:50:53.1162Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.62:2379"}
	{"level":"info","ts":"2024-07-25T19:00:53.159642Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":852}
	{"level":"info","ts":"2024-07-25T19:00:53.169867Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":852,"took":"9.527413ms","hash":988434894,"current-db-size-bytes":2723840,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2723840,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-07-25T19:00:53.170028Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":988434894,"revision":852,"compact-revision":-1}
	
	
	==> kernel <==
	 19:04:24 up 14 min,  0 users,  load average: 0.06, 0.11, 0.09
	Linux no-preload-371663 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0725 19:00:55.306122       1 handler_proxy.go:99] no RequestInfo found in the context
	E0725 19:00:55.306432       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0725 19:00:55.307735       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 19:00:55.307828       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:01:55.307862       1 handler_proxy.go:99] no RequestInfo found in the context
	E0725 19:01:55.308114       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0725 19:01:55.307995       1 handler_proxy.go:99] no RequestInfo found in the context
	E0725 19:01:55.308231       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0725 19:01:55.309338       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 19:01:55.309425       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:03:55.310022       1 handler_proxy.go:99] no RequestInfo found in the context
	E0725 19:03:55.310327       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0725 19:03:55.310071       1 handler_proxy.go:99] no RequestInfo found in the context
	E0725 19:03:55.310472       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0725 19:03:55.311651       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 19:03:55.311726       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] <==
	E0725 18:58:58.672835       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 18:58:58.741416       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 18:59:28.680865       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 18:59:28.749495       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 18:59:58.687495       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 18:59:58.757250       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:00:28.693549       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:00:28.765686       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:00:58.701218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:00:58.774642       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:01:28.708194       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:01:28.781815       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0725 19:01:37.238296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-371663"
	E0725 19:01:58.714473       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:01:58.789257       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0725 19:01:59.719073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="296.496µs"
	I0725 19:02:11.717857       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="91.81µs"
	E0725 19:02:28.720640       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:02:28.797502       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:02:58.727478       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:02:58.805859       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:03:28.733339       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:03:28.814065       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:03:58.739706       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:03:58.821833       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0725 18:50:55.236060       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0725 18:50:55.246714       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.62"]
	E0725 18:50:55.246796       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0725 18:50:55.308982       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0725 18:50:55.309071       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:50:55.309122       1 server_linux.go:170] "Using iptables Proxier"
	I0725 18:50:55.316532       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0725 18:50:55.316775       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0725 18:50:55.317041       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:50:55.329967       1 config.go:197] "Starting service config controller"
	I0725 18:50:55.330003       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:50:55.330037       1 config.go:104] "Starting endpoint slice config controller"
	I0725 18:50:55.330041       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:50:55.343812       1 config.go:326] "Starting node config controller"
	I0725 18:50:55.343838       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:50:55.430207       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:50:55.430363       1 shared_informer.go:320] Caches are synced for service config
	I0725 18:50:55.444023       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] <==
	I0725 18:50:52.465444       1 serving.go:386] Generated self-signed cert in-memory
	W0725 18:50:54.288443       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 18:50:54.288532       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:50:54.288571       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 18:50:54.288594       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 18:50:54.352726       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0725 18:50:54.355297       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:50:54.360416       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 18:50:54.361082       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0725 18:50:54.361018       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:50:54.365175       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:50:54.465498       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 19:01:50 no-preload-371663 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:01:50 no-preload-371663 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:01:50 no-preload-371663 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:01:50 no-preload-371663 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:01:59 no-preload-371663 kubelet[1301]: E0725 19:01:59.703583    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:02:11 no-preload-371663 kubelet[1301]: E0725 19:02:11.704068    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:02:26 no-preload-371663 kubelet[1301]: E0725 19:02:26.708208    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:02:38 no-preload-371663 kubelet[1301]: E0725 19:02:38.703968    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:02:50 no-preload-371663 kubelet[1301]: E0725 19:02:50.719639    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:02:50 no-preload-371663 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:02:50 no-preload-371663 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:02:50 no-preload-371663 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:02:50 no-preload-371663 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:02:51 no-preload-371663 kubelet[1301]: E0725 19:02:51.704372    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:03:06 no-preload-371663 kubelet[1301]: E0725 19:03:06.704208    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:03:21 no-preload-371663 kubelet[1301]: E0725 19:03:21.704021    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:03:36 no-preload-371663 kubelet[1301]: E0725 19:03:36.705182    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:03:47 no-preload-371663 kubelet[1301]: E0725 19:03:47.704101    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:03:50 no-preload-371663 kubelet[1301]: E0725 19:03:50.717543    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:03:50 no-preload-371663 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:03:50 no-preload-371663 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:03:50 no-preload-371663 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:03:50 no-preload-371663 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:04:00 no-preload-371663 kubelet[1301]: E0725 19:04:00.703822    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:04:11 no-preload-371663 kubelet[1301]: E0725 19:04:11.703487    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	
	
	==> storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] <==
	I0725 18:51:25.960864       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 18:51:25.970065       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 18:51:25.970248       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 18:51:43.368816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 18:51:43.370109       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-371663_5df74d8e-26b3-46c7-9d6e-571d4b0da898!
	I0725 18:51:43.370416       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b09119ed-dae1-444e-8fd0-359a6539513b", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-371663_5df74d8e-26b3-46c7-9d6e-571d4b0da898 became leader
	I0725 18:51:43.470517       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-371663_5df74d8e-26b3-46c7-9d6e-571d4b0da898!
	
	
	==> storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] <==
	I0725 18:50:55.161420       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0725 18:51:25.164472       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-371663 -n no-preload-371663
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-371663 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-zthnk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-371663 describe pod metrics-server-78fcd8795b-zthnk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-371663 describe pod metrics-server-78fcd8795b-zthnk: exit status 1 (72.329361ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-zthnk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-371663 describe pod metrics-server-78fcd8795b-zthnk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
(previous warning repeated 13 more times)
E0725 18:59:12.056096   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
(previous warning repeated 166 more times)
E0725 19:01:58.590410   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
E0725 19:04:12.056242   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
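Note: the warning above comes from the test helper's poll loop, which repeatedly lists kubernetes-dashboard pods while the apiserver at 192.168.39.29:8443 refuses connections; the same line is emitted on every poll attempt until the wait times out. A roughly equivalent manual check (a sketch only; the context name below is a placeholder, not taken from this log) would be:

    kubectl --context old-k8s-version-<profile> -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

or querying the endpoint from the log directly:

    curl -k 'https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard'

Both fail with "connection refused" for as long as the apiserver on that node is down.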
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
E0725 19:05:01.641587   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
E0725 19:06:58.590310   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
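Each of the connection-refused warnings above corresponds to one poll of the cluster's API server for a pod matching the k8s-app=kubernetes-dashboard selector; the final attempt instead fails in the client-side rate limiter because the overall wait deadline has already expired. For reference only, a minimal client-go sketch of this kind of wait loop looks roughly like the following (hypothetical package and function names; this is not minikube's actual helper):

    package dashwait // hypothetical package name, used only for this illustration

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPod polls the given namespace until a pod matching the label
    // selector is Running, or the timeout expires. List errors (such as the
    // "connection refused" responses above) are logged and retried, not fatal.
    func waitForLabeledPod(kubeconfig, namespace, selector string, timeout time.Duration) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	return wait.PollUntilContextTimeout(context.Background(), 10*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil {
    				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", namespace, selector, err)
    				return false, nil // keep polling until the deadline
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }

With the API server down, every List call returns a dial error, producing exactly the pattern of warnings recorded above until the 9m0s budget is exhausted.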
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108542 -n old-k8s-version-108542
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 2 (234.95746ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-108542" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
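After the wait fails, the harness checks the profile's API-server state before attempting further kubectl commands, which is why the "Stopped" result above only causes the kubectl-based checks to be skipped ("may be ok") rather than adding another error. A rough sketch of that decision, assuming a hypothetical helper name (not the test's actual code), and reusing the exact status command shown above:

    package statuscheck // hypothetical package, for illustration only

    import (
    	"os/exec"
    	"strings"
    )

    // apiServerRunning shells out to `minikube status --format={{.APIServer}}` for
    // the given profile, mirroring the command above. A non-zero exit such as
    // "exit status 2" with output "Stopped" is tolerated: the caller simply skips
    // kubectl commands instead of failing harder.
    func apiServerRunning(profile string) bool {
    	out, err := exec.Command("out/minikube-linux-amd64", "status",
    		"--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
    	if err != nil {
    		return false
    	}
    	return strings.TrimSpace(string(out)) == "Running"
    }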
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 2 (219.59945ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-108542 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-108542 logs -n 25: (1.620659302s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC |                     |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979261                              | cert-expiration-979261       | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:42 UTC |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-819413             | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-819413                  | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-108542        | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| image   | newest-cni-819413 image list                           | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-371663                  | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-371663 --memory=2200                     | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-600433       | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:54 UTC |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-646344            | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-108542             | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-646344                 | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC | 25 Jul 24 18:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:47:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:47:51.335413   60732 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:47:51.335822   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.335880   60732 out.go:304] Setting ErrFile to fd 2...
	I0725 18:47:51.335900   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.336419   60732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:47:51.337339   60732 out.go:298] Setting JSON to false
	I0725 18:47:51.338209   60732 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5415,"bootTime":1721927856,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:47:51.338264   60732 start.go:139] virtualization: kvm guest
	I0725 18:47:51.340134   60732 out.go:177] * [embed-certs-646344] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:47:51.341750   60732 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:47:51.341752   60732 notify.go:220] Checking for updates...
	I0725 18:47:51.344351   60732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:47:51.345770   60732 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:47:51.346912   60732 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:47:51.348038   60732 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:47:51.349161   60732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:47:51.350578   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:47:51.350953   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.350991   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.365561   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0725 18:47:51.365978   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.366490   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.366509   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.366823   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.366999   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.367234   60732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:47:51.367497   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.367527   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.381639   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0725 18:47:51.381960   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.382381   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.382402   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.382685   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.382870   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.415199   60732 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:47:51.416470   60732 start.go:297] selected driver: kvm2
	I0725 18:47:51.416488   60732 start.go:901] validating driver "kvm2" against &{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.416607   60732 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:47:51.417317   60732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.417405   60732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:47:51.431942   60732 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:47:51.432284   60732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:47:51.432371   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:47:51.432386   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:47:51.432434   60732 start.go:340] cluster config:
	{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.432535   60732 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.435012   60732 out.go:177] * Starting "embed-certs-646344" primary control-plane node in "embed-certs-646344" cluster
	I0725 18:47:53.472602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:47:51.436106   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:47:51.436136   60732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 18:47:51.436143   60732 cache.go:56] Caching tarball of preloaded images
	I0725 18:47:51.436215   60732 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:47:51.436238   60732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 18:47:51.436365   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:47:51.436560   60732 start.go:360] acquireMachinesLock for embed-certs-646344: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:47:59.552616   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:02.624594   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:08.704607   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:11.776581   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:17.856602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:20.928547   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:27.008590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:30.084604   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:36.160617   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:39.232633   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:45.312630   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:48.384662   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:54.464559   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:57.536621   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:03.616552   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:06.688590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.773620   59645 start.go:364] duration metric: took 4m26.592394108s to acquireMachinesLock for "default-k8s-diff-port-600433"
	I0725 18:49:15.773683   59645 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:15.773694   59645 fix.go:54] fixHost starting: 
	I0725 18:49:15.774019   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:15.774051   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:15.789240   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0725 18:49:15.789740   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:15.790212   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:15.790233   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:15.790591   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:15.790845   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:15.791014   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:15.793113   59645 fix.go:112] recreateIfNeeded on default-k8s-diff-port-600433: state=Stopped err=<nil>
	I0725 18:49:15.793149   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	W0725 18:49:15.793313   59645 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:15.795191   59645 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-600433" ...
	I0725 18:49:12.768538   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.771150   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:15.771186   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771533   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:49:15.771558   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771774   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:49:15.773458   59378 machine.go:97] duration metric: took 4m37.565633658s to provisionDockerMachine
	I0725 18:49:15.773505   59378 fix.go:56] duration metric: took 4m37.588536865s for fixHost
	I0725 18:49:15.773515   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 4m37.588577134s
	W0725 18:49:15.773539   59378 start.go:714] error starting host: provision: host is not running
	W0725 18:49:15.773622   59378 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0725 18:49:15.773634   59378 start.go:729] Will try again in 5 seconds ...
	I0725 18:49:15.796482   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Start
	I0725 18:49:15.796686   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring networks are active...
	I0725 18:49:15.797399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network default is active
	I0725 18:49:15.797752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network mk-default-k8s-diff-port-600433 is active
	I0725 18:49:15.798080   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Getting domain xml...
	I0725 18:49:15.798673   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Creating domain...
	I0725 18:49:17.018432   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting to get IP...
	I0725 18:49:17.019400   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.019970   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.020072   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.019959   61066 retry.go:31] will retry after 308.610139ms: waiting for machine to come up
	I0725 18:49:17.330698   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331224   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.331162   61066 retry.go:31] will retry after 334.762083ms: waiting for machine to come up
	I0725 18:49:17.667824   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668211   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668241   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.668158   61066 retry.go:31] will retry after 474.612313ms: waiting for machine to come up
	I0725 18:49:18.145035   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145575   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.145498   61066 retry.go:31] will retry after 493.878098ms: waiting for machine to come up
	I0725 18:49:18.641257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641839   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.641705   61066 retry.go:31] will retry after 747.653142ms: waiting for machine to come up
	I0725 18:49:20.776022   59378 start.go:360] acquireMachinesLock for no-preload-371663: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:49:19.390788   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391296   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:19.391237   61066 retry.go:31] will retry after 790.014184ms: waiting for machine to come up
	I0725 18:49:20.183244   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183733   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183756   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:20.183676   61066 retry.go:31] will retry after 932.227483ms: waiting for machine to come up
	I0725 18:49:21.117548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.117989   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.118019   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:21.117947   61066 retry.go:31] will retry after 1.421954156s: waiting for machine to come up
	I0725 18:49:22.541650   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542032   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542059   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:22.541972   61066 retry.go:31] will retry after 1.281624824s: waiting for machine to come up
	I0725 18:49:23.825380   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825721   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825738   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:23.825700   61066 retry.go:31] will retry after 1.470467032s: waiting for machine to come up
	I0725 18:49:25.298488   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.298993   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.299016   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:25.298958   61066 retry.go:31] will retry after 2.857621922s: waiting for machine to come up
	I0725 18:49:28.157929   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158361   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158387   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:28.158322   61066 retry.go:31] will retry after 2.354044303s: waiting for machine to come up
	I0725 18:49:30.514911   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515408   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515440   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:30.515361   61066 retry.go:31] will retry after 4.26590841s: waiting for machine to come up
	I0725 18:49:36.036943   60176 start.go:364] duration metric: took 3m49.551567331s to acquireMachinesLock for "old-k8s-version-108542"
	I0725 18:49:36.037007   60176 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:36.037018   60176 fix.go:54] fixHost starting: 
	I0725 18:49:36.037477   60176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:36.037517   60176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:36.055190   60176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0725 18:49:36.055631   60176 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:36.056086   60176 main.go:141] libmachine: Using API Version  1
	I0725 18:49:36.056105   60176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:36.056466   60176 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:36.056685   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:36.056862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetState
	I0725 18:49:36.058311   60176 fix.go:112] recreateIfNeeded on old-k8s-version-108542: state=Stopped err=<nil>
	I0725 18:49:36.058348   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	W0725 18:49:36.058530   60176 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:36.060822   60176 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-108542" ...
	I0725 18:49:36.062077   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .Start
	I0725 18:49:36.062241   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring networks are active...
	I0725 18:49:36.062926   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network default is active
	I0725 18:49:36.063329   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network mk-old-k8s-version-108542 is active
	I0725 18:49:36.063698   60176 main.go:141] libmachine: (old-k8s-version-108542) Getting domain xml...
	I0725 18:49:36.064367   60176 main.go:141] libmachine: (old-k8s-version-108542) Creating domain...
	I0725 18:49:34.786308   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786801   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Found IP for machine: 192.168.50.221
	I0725 18:49:34.786836   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has current primary IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserving static IP address...
	I0725 18:49:34.787187   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.787223   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | skip adding static IP to network mk-default-k8s-diff-port-600433 - found existing host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"}
	I0725 18:49:34.787237   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserved static IP address: 192.168.50.221
	I0725 18:49:34.787251   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Getting to WaitForSSH function...
	I0725 18:49:34.787261   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for SSH to be available...
	I0725 18:49:34.789202   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789467   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.789494   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789582   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH client type: external
	I0725 18:49:34.789608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa (-rw-------)
	I0725 18:49:34.789642   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:34.789656   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | About to run SSH command:
	I0725 18:49:34.789672   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | exit 0
	I0725 18:49:34.916303   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | SSH cmd err, output: <nil>: 
	I0725 18:49:34.916741   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetConfigRaw
	I0725 18:49:34.917309   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:34.919931   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920356   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.920388   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920711   59645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/config.json ...
	I0725 18:49:34.920952   59645 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:34.920973   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:34.921158   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:34.923280   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923663   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.923699   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923782   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:34.923953   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924116   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924367   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:34.924559   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:34.924778   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:34.924789   59645 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:35.036568   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:35.036605   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.036862   59645 buildroot.go:166] provisioning hostname "default-k8s-diff-port-600433"
	I0725 18:49:35.036890   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.037089   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.039523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.039891   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.039928   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.040048   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.040240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040409   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040540   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.040696   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.040855   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.040867   59645 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-600433 && echo "default-k8s-diff-port-600433" | sudo tee /etc/hostname
	I0725 18:49:35.170553   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-600433
	
	I0725 18:49:35.170606   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.173260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173590   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.173615   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173811   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.174057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.174606   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.174762   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.174798   59645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-600433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-600433/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-600433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:35.292349   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:35.292387   59645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:35.292425   59645 buildroot.go:174] setting up certificates
	I0725 18:49:35.292443   59645 provision.go:84] configureAuth start
	I0725 18:49:35.292456   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.292749   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:35.295317   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295628   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.295657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295817   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.297815   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298114   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.298146   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298330   59645 provision.go:143] copyHostCerts
	I0725 18:49:35.298373   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:35.298384   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:35.298461   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:35.298578   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:35.298590   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:35.298631   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:35.298725   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:35.298735   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:35.298767   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:35.298846   59645 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-600433 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-600433 localhost minikube]
	I0725 18:49:35.385077   59645 provision.go:177] copyRemoteCerts
	I0725 18:49:35.385142   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:35.385168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.387858   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388165   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.388195   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.388604   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.388760   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.388903   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.473920   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:35.496193   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0725 18:49:35.517673   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:35.538593   59645 provision.go:87] duration metric: took 246.139455ms to configureAuth
	I0725 18:49:35.538617   59645 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:35.538796   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:35.538860   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.541598   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542144   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.542168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542369   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.542548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542664   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542812   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.542937   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.543138   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.543167   59645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:35.799471   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:35.799495   59645 machine.go:97] duration metric: took 878.530074ms to provisionDockerMachine
	I0725 18:49:35.799509   59645 start.go:293] postStartSetup for "default-k8s-diff-port-600433" (driver="kvm2")
	I0725 18:49:35.799526   59645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:35.799569   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:35.799861   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:35.799916   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.802372   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.802776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802882   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.803057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.803200   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.803304   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.886188   59645 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:35.890053   59645 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:35.890090   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:35.890157   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:35.890227   59645 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:35.890317   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:35.899121   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:35.921904   59645 start.go:296] duration metric: took 122.381588ms for postStartSetup
	I0725 18:49:35.921942   59645 fix.go:56] duration metric: took 20.148249245s for fixHost
	I0725 18:49:35.921960   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.924865   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925265   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.925300   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925414   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.925608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925876   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.926011   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.926191   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.926205   59645 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:49:36.036748   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933376.013042854
	
	I0725 18:49:36.036779   59645 fix.go:216] guest clock: 1721933376.013042854
	I0725 18:49:36.036790   59645 fix.go:229] Guest: 2024-07-25 18:49:36.013042854 +0000 UTC Remote: 2024-07-25 18:49:35.921945116 +0000 UTC m=+286.890099623 (delta=91.097738ms)
	I0725 18:49:36.036855   59645 fix.go:200] guest clock delta is within tolerance: 91.097738ms
	I0725 18:49:36.036863   59645 start.go:83] releasing machines lock for "default-k8s-diff-port-600433", held for 20.263198657s
	I0725 18:49:36.036905   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.037178   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:36.040216   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040692   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.040717   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040881   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041501   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041596   59645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:36.041657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.041693   59645 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:36.041718   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.044433   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.044775   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044799   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045030   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045191   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.045209   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045217   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045375   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045476   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045501   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.045648   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045828   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045988   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.158410   59645 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:36.164254   59645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:36.305911   59645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:36.312544   59645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:36.312642   59645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:36.327394   59645 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:36.327420   59645 start.go:495] detecting cgroup driver to use...
	I0725 18:49:36.327497   59645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:36.342695   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:36.355528   59645 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:36.355593   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:36.369191   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:36.382786   59645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:36.498465   59645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:36.635188   59645 docker.go:233] disabling docker service ...
	I0725 18:49:36.635272   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:36.655356   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:36.671402   59645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:36.819969   59645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:36.961130   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:36.976459   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:36.995542   59645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:49:36.995607   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.006967   59645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:37.007041   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.017503   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.027807   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.037804   59645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:37.047817   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.057895   59645 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.075586   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.085987   59645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:37.095527   59645 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:37.095593   59645 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:37.107540   59645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:37.117409   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:37.246455   59645 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:49:37.383873   59645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:37.383959   59645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:37.388630   59645 start.go:563] Will wait 60s for crictl version
	I0725 18:49:37.388687   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:49:37.393190   59645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:37.439603   59645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:37.439688   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.468723   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.501339   59645 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:49:37.502895   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:37.505728   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506098   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:37.506128   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506341   59645 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:37.510432   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:37.523446   59645 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:37.523608   59645 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:49:37.523691   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:37.561149   59645 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:49:37.561209   59645 ssh_runner.go:195] Run: which lz4
	I0725 18:49:37.565614   59645 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:49:37.569702   59645 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:37.569738   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:49:38.884355   59645 crio.go:462] duration metric: took 1.318757754s to copy over tarball
	I0725 18:49:38.884481   59645 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:49:37.310225   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting to get IP...
	I0725 18:49:37.311059   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.311480   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.311557   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.311444   61209 retry.go:31] will retry after 249.654633ms: waiting for machine to come up
	I0725 18:49:37.563210   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.563727   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.563774   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.563696   61209 retry.go:31] will retry after 360.974896ms: waiting for machine to come up
	I0725 18:49:37.926464   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.927033   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.927104   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.926935   61209 retry.go:31] will retry after 392.213819ms: waiting for machine to come up
	I0725 18:49:38.320659   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.321153   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.321182   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.321107   61209 retry.go:31] will retry after 443.035852ms: waiting for machine to come up
	I0725 18:49:38.765569   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.765972   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.765996   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.765944   61209 retry.go:31] will retry after 691.876502ms: waiting for machine to come up
	I0725 18:49:39.459944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:39.460308   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:39.460354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:39.460259   61209 retry.go:31] will retry after 870.093433ms: waiting for machine to come up
	I0725 18:49:40.331944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:40.332382   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:40.332411   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:40.332301   61209 retry.go:31] will retry after 875.3931ms: waiting for machine to come up
	I0725 18:49:41.209789   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:41.210251   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:41.210275   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:41.210196   61209 retry.go:31] will retry after 1.355093494s: waiting for machine to come up
	I0725 18:49:41.126101   59645 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241583376s)
	I0725 18:49:41.126141   59645 crio.go:469] duration metric: took 2.24174402s to extract the tarball
	I0725 18:49:41.126152   59645 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:49:41.163655   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:41.204248   59645 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:49:41.204270   59645 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:49:41.204278   59645 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0725 18:49:41.204442   59645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-600433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:49:41.204506   59645 ssh_runner.go:195] Run: crio config
	I0725 18:49:41.248210   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:41.248239   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:41.248255   59645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:49:41.248286   59645 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-600433 NodeName:default-k8s-diff-port-600433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:49:41.248491   59645 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-600433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:49:41.248591   59645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:49:41.257987   59645 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:49:41.258057   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:49:41.267141   59645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0725 18:49:41.283078   59645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:49:41.299009   59645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0725 18:49:41.315642   59645 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0725 18:49:41.319267   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:41.330435   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:41.453042   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:49:41.471864   59645 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433 for IP: 192.168.50.221
	I0725 18:49:41.471896   59645 certs.go:194] generating shared ca certs ...
	I0725 18:49:41.471915   59645 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:41.472098   59645 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:49:41.472151   59645 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:49:41.472163   59645 certs.go:256] generating profile certs ...
	I0725 18:49:41.472271   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.key
	I0725 18:49:41.472399   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key.28cfcfe9
	I0725 18:49:41.472470   59645 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key
	I0725 18:49:41.472630   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:49:41.472681   59645 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:49:41.472696   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:49:41.472734   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:49:41.472768   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:49:41.472801   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:49:41.472875   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:41.473783   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:49:41.519536   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:49:41.570915   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:49:41.596050   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:49:41.622290   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 18:49:41.644771   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:49:41.673056   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:49:41.698215   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:49:41.720502   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:49:41.742897   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:49:41.765403   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:49:41.788097   59645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:49:41.804016   59645 ssh_runner.go:195] Run: openssl version
	I0725 18:49:41.809451   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:49:41.819312   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823677   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823731   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.829342   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:49:41.839245   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:49:41.848902   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852894   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852948   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.858231   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:49:41.868414   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:49:41.878478   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882534   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882596   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.888100   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:49:41.897994   59645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:49:41.902066   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:49:41.907593   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:49:41.913339   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:49:41.918977   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:49:41.924846   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:49:41.931208   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 18:49:41.936979   59645 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:49:41.937105   59645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:49:41.937165   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:41.973862   59645 cri.go:89] found id: ""
	I0725 18:49:41.973954   59645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:49:41.986980   59645 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:49:41.987006   59645 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:49:41.987059   59645 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:49:41.996155   59645 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:49:41.997176   59645 kubeconfig.go:125] found "default-k8s-diff-port-600433" server: "https://192.168.50.221:8444"
	I0725 18:49:41.999255   59645 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:49:42.007863   59645 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0725 18:49:42.007898   59645 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:49:42.007910   59645 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:49:42.007950   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:42.041234   59645 cri.go:89] found id: ""
	I0725 18:49:42.041344   59645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:49:42.057752   59645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:49:42.067347   59645 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:49:42.067367   59645 kubeadm.go:157] found existing configuration files:
	
	I0725 18:49:42.067414   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0725 18:49:42.075815   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:49:42.075862   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:49:42.084352   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0725 18:49:42.092738   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:49:42.092795   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:49:42.101917   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.110104   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:49:42.110171   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.118781   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0725 18:49:42.127369   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:49:42.127417   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:49:42.136433   59645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:49:42.145402   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.256466   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.967465   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.180271   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.238156   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
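
For orientation: the five Run lines above replay the kubeadm init phases minikube uses when it restarts an existing primary control plane (certs, kubeconfig, kubelet-start, control-plane, etcd). Below is a minimal Go sketch of that sequence; runSSH is a hypothetical stand-in for the ssh_runner seen in the log, not minikube's real API.

package main

import "fmt"

// The phase list mirrors the "kubeadm init phase ..." commands in the log above.
var restartControlPlanePhases = []string{
	"certs all",
	"kubeconfig all",
	"kubelet-start",
	"control-plane all",
	"etcd local",
}

// runKubeadmPhases replays each phase through a caller-supplied SSH runner.
func runKubeadmPhases(runSSH func(cmd string) error) error {
	for _, phase := range restartControlPlanePhases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if err := runSSH(cmd); err != nil {
			return fmt.Errorf("kubeadm phase %q failed: %w", phase, err)
		}
	}
	return nil
}

func main() {
	// Dry run: print the commands instead of executing them over SSH.
	_ = runKubeadmPhases(func(cmd string) error {
		fmt.Println(cmd)
		return nil
	})
}
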
	I0725 18:49:43.333954   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:49:43.334063   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:43.834381   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:42.566588   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:42.567061   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:42.567089   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:42.567010   61209 retry.go:31] will retry after 1.670701174s: waiting for machine to come up
	I0725 18:49:44.238961   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:44.239359   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:44.239377   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:44.239329   61209 retry.go:31] will retry after 2.028917586s: waiting for machine to come up
	I0725 18:49:46.270213   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:46.270674   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:46.270695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:46.270630   61209 retry.go:31] will retry after 2.760614678s: waiting for machine to come up
	I0725 18:49:44.335103   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:44.835115   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.334875   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.834915   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.849684   59645 api_server.go:72] duration metric: took 2.515729384s to wait for apiserver process to appear ...
	I0725 18:49:45.849717   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:49:45.849752   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.417830   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:49:48.417861   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:49:48.417898   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.496770   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.496823   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:48.850275   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.854417   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.854446   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.350652   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.356554   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:49.356585   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.849872   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.855690   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:49:49.863742   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:49:49.863770   59645 api_server.go:131] duration metric: took 4.014045168s to wait for apiserver health ...
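
The /healthz exchange above (403 while the RBAC bootstrap roles that allow the probe are not yet in place, 500 while post-start hooks finish, finally 200 "ok") is plain HTTPS polling at roughly half-second intervals. A hedged sketch of such a loop follows; the skipped TLS verification and the fixed cadence are assumptions for illustration, not minikube's actual client configuration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes. Certificate verification is skipped here only because
// this sketch has no access to the cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.221:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
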
	I0725 18:49:49.863780   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:49.863788   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:49.865438   59645 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:49:49.034670   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:49.035109   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:49.035136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:49.035073   61209 retry.go:31] will retry after 2.928049351s: waiting for machine to come up
	I0725 18:49:49.866747   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:49:49.877963   59645 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
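
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced in the "Configuring bridge CNI" line. Its exact contents are not shown in this log, so the sketch below only illustrates the general shape of a bridge conflist; the plugin list and the 10.244.0.0/16 pod subnet are assumptions, not the logged bytes.

package main

import "os"

// An illustrative bridge conflist; values are assumptions for illustration only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Same destination path as the scp in the log; requires root on a real node.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
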
	I0725 18:49:49.898915   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:49:49.916996   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:49:49.917037   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:49:49.917049   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:49:49.917067   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:49:49.917080   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:49:49.917093   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:49:49.917105   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:49:49.917112   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:49:49.917120   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:49:49.917127   59645 system_pods.go:74] duration metric: took 18.191827ms to wait for pod list to return data ...
	I0725 18:49:49.917145   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:49:49.921009   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:49:49.921032   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:49:49.921046   59645 node_conditions.go:105] duration metric: took 3.893327ms to run NodePressure ...
	I0725 18:49:49.921064   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:50.188485   59645 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192676   59645 kubeadm.go:739] kubelet initialised
	I0725 18:49:50.192696   59645 kubeadm.go:740] duration metric: took 4.188813ms waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192710   59645 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:50.197736   59645 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.203856   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203881   59645 pod_ready.go:81] duration metric: took 6.126055ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.203891   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203897   59645 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.209211   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209233   59645 pod_ready.go:81] duration metric: took 5.32855ms for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.209242   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209248   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.216079   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216104   59645 pod_ready.go:81] duration metric: took 6.848427ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.216115   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216122   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.301694   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301718   59645 pod_ready.go:81] duration metric: took 85.5884ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.301728   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301735   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.702363   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702392   59645 pod_ready.go:81] duration metric: took 400.649914ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.702400   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702406   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.102906   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102943   59645 pod_ready.go:81] duration metric: took 400.527709ms for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.102955   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102964   59645 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.502187   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502217   59645 pod_ready.go:81] duration metric: took 399.245254ms for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.502228   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502235   59645 pod_ready.go:38] duration metric: took 1.309515361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
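
The pod_ready loop above inspects each system-critical pod's Ready condition and skips pods whose node has not reported Ready yet. A minimal client-go sketch of that per-pod check follows; the kubeconfig path is a placeholder, and the pod name is taken from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, the same
// condition the pod_ready loop above waits on.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-mfjzs", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}
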
	I0725 18:49:51.502249   59645 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:49:51.513796   59645 ops.go:34] apiserver oom_adj: -16
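
The oom_adj probe above confirms the restarted apiserver is strongly deprioritized for the kernel OOM killer (the legacy oom_adj scale runs from -17 to 15). A small sketch doing the same lookup natively rather than through a shell pipeline; pgrep and the /proc layout are the only assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj finds the kube-apiserver PID with pgrep and reads the legacy
// /proc/<pid>/oom_adj value, as the shell one-liner in the log does.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not running: %w", err)
	}
	pid := strings.Fields(string(out))[0]
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	v, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver oom_adj:", v)
}
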
	I0725 18:49:51.513816   59645 kubeadm.go:597] duration metric: took 9.526804087s to restartPrimaryControlPlane
	I0725 18:49:51.513823   59645 kubeadm.go:394] duration metric: took 9.576855212s to StartCluster
	I0725 18:49:51.513842   59645 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.513969   59645 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:49:51.515531   59645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.515761   59645 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:49:51.515843   59645 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:49:51.515951   59645 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515975   59645 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515983   59645 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.515995   59645 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:49:51.516017   59645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-600433"
	I0725 18:49:51.516024   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516025   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:51.516022   59645 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.516103   59645 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.516123   59645 addons.go:243] addon metrics-server should already be in state true
	I0725 18:49:51.516202   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516314   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516361   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516365   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516386   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516636   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516713   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.517682   59645 out.go:177] * Verifying Kubernetes components...
	I0725 18:49:51.519072   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:51.530909   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35239
	I0725 18:49:51.531207   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0725 18:49:51.531391   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531704   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531952   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.531978   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532148   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.532169   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532291   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.532474   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.532501   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.533028   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.533069   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.534984   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0725 18:49:51.535323   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.535729   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.535749   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.536027   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.536055   59645 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.536077   59645 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:49:51.536103   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.536463   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536491   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.536518   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536562   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.548458   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I0725 18:49:51.548987   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.549539   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.549563   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.549880   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.550016   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0725 18:49:51.550105   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.550366   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.550862   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.550897   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0725 18:49:51.550975   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551220   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.551462   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.551708   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.551727   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.551768   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.552170   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.552745   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.552787   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.553221   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.554936   59645 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:49:51.556152   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:49:51.556166   59645 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:49:51.556184   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.556202   59645 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:49:51.557826   59645 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.557870   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:49:51.557892   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.558763   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559109   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.559126   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559255   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.559402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.559522   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.559637   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.560776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561142   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.561169   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561285   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.561462   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.561624   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.561769   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.572412   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I0725 18:49:51.572773   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.573256   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.573269   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.573596   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.573793   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.575260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.575503   59645 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.575513   59645 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:49:51.575523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.577887   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578208   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.578228   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578339   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.578496   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.578651   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.578775   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.710511   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:49:51.728187   59645 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:51.810767   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:49:51.810801   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:49:51.822774   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.828890   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.841308   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:49:51.841332   59645 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:49:51.864965   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:49:51.864991   59645 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:49:51.910359   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:49:52.699480   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699512   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699488   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699573   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699812   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699829   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699839   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699893   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.699926   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699940   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699956   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699968   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.700056   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700086   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700202   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700218   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700248   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.704859   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.704873   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.705126   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.705144   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.794977   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795000   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795318   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795339   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795341   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.795346   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795360   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795632   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795657   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795668   59645 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-600433"
	I0725 18:49:52.797643   59645 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:49:52.798886   59645 addons.go:510] duration metric: took 1.283046902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
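
Every addon above is enabled the same way: its manifest is scp'd under /etc/kubernetes/addons/ and then applied with the kubectl binary bundled inside the guest, as the Run lines show. A compact sketch of that apply step, reusing the hypothetical runSSH stand-in from the earlier kubeadm sketch.

package main

import "fmt"

// applyAddonManifests applies already-copied addon manifests with the guest's
// bundled kubectl, mirroring the Run lines above.
func applyAddonManifests(runSSH func(cmd string) error, manifests []string) error {
	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply"
	for _, m := range manifests {
		cmd += " -f " + m
	}
	return runSSH(cmd)
}

func main() {
	// Dry run with the metrics-server manifests named in the log.
	_ = applyAddonManifests(
		func(cmd string) error { fmt.Println(cmd); return nil },
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
}
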
	I0725 18:49:53.731631   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.964707   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:51.965228   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:51.965263   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:51.965151   61209 retry.go:31] will retry after 3.053047755s: waiting for machine to come up
	I0725 18:49:55.022350   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022815   60176 main.go:141] libmachine: (old-k8s-version-108542) Found IP for machine: 192.168.39.29
	I0725 18:49:55.022846   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has current primary IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022858   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserving static IP address...
	I0725 18:49:55.023277   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserved static IP address: 192.168.39.29
	I0725 18:49:55.023333   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.023342   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting for SSH to be available...
	I0725 18:49:55.023394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | skip adding static IP to network mk-old-k8s-version-108542 - found existing host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"}
	I0725 18:49:55.023425   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Getting to WaitForSSH function...
	I0725 18:49:55.025250   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025544   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.025574   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025668   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH client type: external
	I0725 18:49:55.025699   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa (-rw-------)
	I0725 18:49:55.025731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:55.025753   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | About to run SSH command:
	I0725 18:49:55.025770   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | exit 0
	I0725 18:49:55.152309   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | SSH cmd err, output: <nil>: 
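
The WaitForSSH step above simply retries an external ssh invocation of "exit 0" with the options printed in the DBG lines until the guest accepts the connection. A sketch of that probe using os/exec follows; the flag set is trimmed to the essentials, and the retry count and back-off are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries "ssh ... exit 0" until it succeeds or attempts run out.
// Host, user, and key path follow the DBG output above.
func waitForSSH(host, keyPath string, attempts int) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	for i := 0; i < attempts; i++ {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(5 * time.Second) // assumed back-off; the real loop varies
	}
	return fmt.Errorf("ssh to %s never became available", host)
}

func main() {
	key := "/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa"
	if err := waitForSSH("192.168.39.29", key, 10); err != nil {
		fmt.Println(err)
	}
}
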
	I0725 18:49:55.152720   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:49:55.153338   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.155460   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.155755   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155969   60176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json ...
	I0725 18:49:55.156128   60176 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:55.156143   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:55.156307   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.158465   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.158795   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.158827   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.159012   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.159174   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159366   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159512   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.159688   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.159902   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.159914   60176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:55.268422   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:55.268446   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268707   60176 buildroot.go:166] provisioning hostname "old-k8s-version-108542"
	I0725 18:49:55.268732   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268931   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.271599   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.271913   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.271949   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.272120   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.272285   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272490   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272657   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.272830   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.273003   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.273017   60176 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-108542 && echo "old-k8s-version-108542" | sudo tee /etc/hostname
	I0725 18:49:55.398261   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-108542
	
	I0725 18:49:55.398291   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.401090   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.401517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401669   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.401870   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402026   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402182   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.402380   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.402621   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.402648   60176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-108542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-108542/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-108542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:55.523079   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
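The SSH snippet above is the idempotent fixup minikube runs so the newly set hostname resolves via 127.0.1.1. As a minimal sketch of how that command string could be assembled (hostsFixupCmd is an invented name for illustration, not minikube's actual helper):

package main

import "fmt"

// hostsFixupCmd builds the idempotent shell snippet that maps the given
// hostname to 127.0.1.1 in /etc/hosts, mirroring the command logged above.
func hostsFixupCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsFixupCmd("old-k8s-version-108542"))
}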
	I0725 18:49:55.523115   60176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:55.523147   60176 buildroot.go:174] setting up certificates
	I0725 18:49:55.523156   60176 provision.go:84] configureAuth start
	I0725 18:49:55.523165   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.523486   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.526235   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526644   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.526675   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526875   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.529466   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.529836   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.529865   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.530004   60176 provision.go:143] copyHostCerts
	I0725 18:49:55.530058   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:55.530068   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:55.530113   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:55.530198   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:55.530205   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:55.530225   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:55.530386   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:55.530401   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:55.530426   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:55.530494   60176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-108542 san=[127.0.0.1 192.168.39.29 localhost minikube old-k8s-version-108542]
	I0725 18:49:55.740503   60176 provision.go:177] copyRemoteCerts
	I0725 18:49:55.740561   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:55.740585   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.743257   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743582   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.743615   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743798   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.743997   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.744160   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.744312   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:55.825771   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:55.847516   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0725 18:49:55.869368   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:55.893223   60176 provision.go:87] duration metric: took 370.054854ms to configureAuth
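configureAuth regenerates the machine's server certificate with the SAN set shown at provision.go:117 and then copies it to /etc/docker over SSH. A small sketch of assembling that SAN list, assuming it is simply the fixed names plus the machine IP and name as seen in the log (serverCertSANs is a hypothetical helper, not the real provisioning code):

package main

import "fmt"

// serverCertSANs returns the subject-alternative names for the machine
// server certificate, matching the set seen in the log:
// [127.0.0.1 <machine IP> localhost minikube <machine name>].
func serverCertSANs(ip, machineName string, extra []string) []string {
	sans := []string{"127.0.0.1", ip, "localhost", "minikube", machineName}
	return append(sans, extra...)
}

func main() {
	fmt.Println(serverCertSANs("192.168.39.29", "old-k8s-version-108542", nil))
}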
	I0725 18:49:55.893255   60176 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:55.893425   60176 config.go:182] Loaded profile config "old-k8s-version-108542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:49:55.893500   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.896394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896703   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.896758   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896962   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.897161   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897431   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897631   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.897855   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.898023   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.898036   60176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:56.181257   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:56.181300   60176 machine.go:97] duration metric: took 1.025159397s to provisionDockerMachine
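The SSH command just above drops a CRIO_MINIKUBE_OPTIONS line into /etc/sysconfig/crio.minikube so the service CIDR is treated as an insecure registry, then restarts CRI-O. A hedged sketch of building that one-liner (crioMinikubeOptsCmd is a made-up name; the real flow runs the command over SSH as logged):

package main

import "fmt"

// crioMinikubeOptsCmd builds the shell one-liner that writes
// /etc/sysconfig/crio.minikube with an --insecure-registry flag for the
// given CIDR and restarts CRI-O, as in the command logged above.
func crioMinikubeOptsCmd(serviceCIDR string) string {
	return fmt.Sprintf(
		`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`,
		serviceCIDR)
}

func main() {
	fmt.Println(crioMinikubeOptsCmd("10.96.0.0/12"))
}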
	I0725 18:49:56.181315   60176 start.go:293] postStartSetup for "old-k8s-version-108542" (driver="kvm2")
	I0725 18:49:56.181334   60176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:56.181353   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.181666   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:56.181688   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.184354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.184718   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184851   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.185034   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.185185   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.185308   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.266683   60176 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:56.270387   60176 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:56.270407   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:56.270474   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:56.270559   60176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:56.270668   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:56.279276   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:56.302444   60176 start.go:296] duration metric: took 121.115308ms for postStartSetup
	I0725 18:49:56.302497   60176 fix.go:56] duration metric: took 20.26546429s for fixHost
	I0725 18:49:56.302517   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.305136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.305517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305706   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.305922   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306074   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306193   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.306317   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:56.306502   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:56.306514   60176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:49:56.412717   60732 start.go:364] duration metric: took 2m4.976127328s to acquireMachinesLock for "embed-certs-646344"
	I0725 18:49:56.412771   60732 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:56.412782   60732 fix.go:54] fixHost starting: 
	I0725 18:49:56.413158   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:56.413188   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:56.432299   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0725 18:49:56.432712   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:56.433231   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:49:56.433260   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:56.433647   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:56.433868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:49:56.434040   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:49:56.435582   60732 fix.go:112] recreateIfNeeded on embed-certs-646344: state=Stopped err=<nil>
	I0725 18:49:56.435617   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	W0725 18:49:56.435793   60732 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:56.437567   60732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-646344" ...
	I0725 18:49:56.412575   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933396.389223979
	
	I0725 18:49:56.412602   60176 fix.go:216] guest clock: 1721933396.389223979
	I0725 18:49:56.412612   60176 fix.go:229] Guest: 2024-07-25 18:49:56.389223979 +0000 UTC Remote: 2024-07-25 18:49:56.302501019 +0000 UTC m=+249.953644815 (delta=86.72296ms)
	I0725 18:49:56.412634   60176 fix.go:200] guest clock delta is within tolerance: 86.72296ms
	I0725 18:49:56.412639   60176 start.go:83] releasing machines lock for "old-k8s-version-108542", held for 20.375658703s
	I0725 18:49:56.412668   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.412935   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:56.415814   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416191   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.416219   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416398   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.416862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417065   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417160   60176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:56.417201   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.417309   60176 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:56.417329   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.420122   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420371   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420526   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420550   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420682   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.420816   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420846   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.420850   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420984   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.421058   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421126   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.421198   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.421272   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421418   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.529391   60176 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:56.535114   60176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:56.674979   60176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:56.681160   60176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:56.681260   60176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:56.696192   60176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:56.696215   60176 start.go:495] detecting cgroup driver to use...
	I0725 18:49:56.696309   60176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:56.713088   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:56.727033   60176 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:56.727095   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:56.742008   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:56.756146   60176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:56.884075   60176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:57.051613   60176 docker.go:233] disabling docker service ...
	I0725 18:49:57.051742   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:57.068011   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:57.082300   60176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:57.208673   60176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:57.372393   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:57.397281   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:57.418913   60176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0725 18:49:57.418978   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.429833   60176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:57.429909   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.440717   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.451076   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.465052   60176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:57.476592   60176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:57.487164   60176 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:57.487225   60176 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:57.501748   60176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:57.514743   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:57.658648   60176 ssh_runner.go:195] Run: sudo systemctl restart crio
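The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod" before the daemon-reload and restart. A minimal sketch that reproduces those edit commands (crioConfEdits is illustrative only, not minikube's actual code):

package main

import "fmt"

// crioConfEdits returns the sed commands that point CRI-O at the desired
// pause image and cgroup driver, matching the commands logged above.
func crioConfEdits(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	for _, c := range crioConfEdits("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(c)
	}
}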
	I0725 18:49:57.811455   60176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:57.811534   60176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:57.816193   60176 start.go:563] Will wait 60s for crictl version
	I0725 18:49:57.816267   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:49:57.819557   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:57.854511   60176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:57.854594   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.881542   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.910664   60176 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0725 18:49:55.733934   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:58.232025   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:56.438776   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Start
	I0725 18:49:56.438950   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring networks are active...
	I0725 18:49:56.439813   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network default is active
	I0725 18:49:56.440144   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network mk-embed-certs-646344 is active
	I0725 18:49:56.440644   60732 main.go:141] libmachine: (embed-certs-646344) Getting domain xml...
	I0725 18:49:56.441344   60732 main.go:141] libmachine: (embed-certs-646344) Creating domain...
	I0725 18:49:57.747307   60732 main.go:141] libmachine: (embed-certs-646344) Waiting to get IP...
	I0725 18:49:57.748364   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.748801   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.748852   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.748752   61389 retry.go:31] will retry after 207.883752ms: waiting for machine to come up
	I0725 18:49:57.958328   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.958813   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.958837   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.958773   61389 retry.go:31] will retry after 256.983672ms: waiting for machine to come up
	I0725 18:49:58.217316   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.217798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.217858   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.217760   61389 retry.go:31] will retry after 427.650618ms: waiting for machine to come up
	I0725 18:49:58.647668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.648053   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.648088   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.648021   61389 retry.go:31] will retry after 585.454725ms: waiting for machine to come up
	I0725 18:49:59.235003   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.235582   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.235612   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.235535   61389 retry.go:31] will retry after 477.660763ms: waiting for machine to come up
	I0725 18:49:59.715182   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.715675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.715706   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.715628   61389 retry.go:31] will retry after 775.403931ms: waiting for machine to come up
	I0725 18:50:00.492798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:00.493211   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:00.493239   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:00.493160   61389 retry.go:31] will retry after 1.086502086s: waiting for machine to come up
	I0725 18:49:57.912004   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:57.914958   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915429   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:57.915462   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915628   60176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:57.919685   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:57.932248   60176 kubeadm.go:883] updating cluster {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:57.932392   60176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:49:57.932440   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:57.982230   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:49:57.982305   60176 ssh_runner.go:195] Run: which lz4
	I0725 18:49:57.986657   60176 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:49:57.990932   60176 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:57.990956   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0725 18:49:59.415735   60176 crio.go:462] duration metric: took 1.429111358s to copy over tarball
	I0725 18:49:59.415800   60176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:49:59.234882   59645 node_ready.go:49] node "default-k8s-diff-port-600433" has status "Ready":"True"
	I0725 18:49:59.234909   59645 node_ready.go:38] duration metric: took 7.506682834s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:59.234921   59645 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:59.243034   59645 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.249940   59645 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace has status "Ready":"True"
	I0725 18:49:59.250024   59645 pod_ready.go:81] duration metric: took 6.957177ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.250051   59645 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.258057   59645 pod_ready.go:102] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:01.757802   59645 pod_ready.go:92] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.757828   59645 pod_ready.go:81] duration metric: took 2.50775832s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.757840   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762837   59645 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.762862   59645 pod_ready.go:81] duration metric: took 5.014715ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762874   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768001   59645 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.768027   59645 pod_ready.go:81] duration metric: took 5.144999ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768039   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772551   59645 pod_ready.go:92] pod "kube-proxy-smhmv" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.772574   59645 pod_ready.go:81] duration metric: took 4.526528ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772585   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.580990   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:01.581438   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:01.581464   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:01.581397   61389 retry.go:31] will retry after 1.452798696s: waiting for machine to come up
	I0725 18:50:03.036272   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:03.036730   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:03.036766   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:03.036682   61389 retry.go:31] will retry after 1.667137658s: waiting for machine to come up
	I0725 18:50:04.705567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:04.705992   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:04.706019   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:04.705958   61389 retry.go:31] will retry after 2.010863389s: waiting for machine to come up
	I0725 18:50:02.370917   60176 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.955090558s)
	I0725 18:50:02.370951   60176 crio.go:469] duration metric: took 2.955186203s to extract the tarball
	I0725 18:50:02.370960   60176 ssh_runner.go:146] rm: /preloaded.tar.lz4
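Because the stat of /preloaded.tar.lz4 failed, the preload tarball was copied over from the host cache and unpacked under /var with lz4-compressed tar, as logged above. A rough sketch of the remote-side commands involved (preloadCommands is a hypothetical helper; the scp step itself is elided):

package main

import "fmt"

// preloadCommands returns the remote commands for checking, and if necessary
// extracting, a preloaded image tarball, mirroring the flow logged above.
func preloadCommands(tarball string) (check, extract string) {
	check = fmt.Sprintf(`stat -c "%%s %%y" %s`, tarball)
	extract = fmt.Sprintf(
		"sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf %s",
		tarball)
	return check, extract
}

func main() {
	check, extract := preloadCommands("/preloaded.tar.lz4")
	fmt.Println(check)
	fmt.Println(extract)
}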
	I0725 18:50:02.411686   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:02.448550   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:50:02.448575   60176 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:02.448653   60176 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0725 18:50:02.448657   60176 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.448722   60176 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.448739   60176 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.448661   60176 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450195   60176 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.450213   60176 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0725 18:50:02.450237   60176 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.450335   60176 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.450375   60176 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.450489   60176 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.711747   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.718711   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0725 18:50:02.721465   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.721473   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.728447   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.745432   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.745791   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.776147   60176 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0725 18:50:02.776200   60176 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.776245   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.857374   60176 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0725 18:50:02.857423   60176 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0725 18:50:02.857486   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.876850   60176 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0725 18:50:02.876897   60176 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.876922   60176 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0725 18:50:02.876963   60176 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.876974   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877024   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877044   60176 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0725 18:50:02.877071   60176 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.877107   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.896960   60176 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0725 18:50:02.897008   60176 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.897011   60176 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0725 18:50:02.897042   60176 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.897053   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897061   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.897083   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897120   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0725 18:50:02.897148   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.897196   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.897248   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.992459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.992499   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0725 18:50:03.005360   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0725 18:50:03.005381   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0725 18:50:03.005435   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0725 18:50:03.005459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:03.005503   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0725 18:50:03.042218   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0725 18:50:03.054960   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0725 18:50:03.279419   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:03.416646   60176 cache_images.go:92] duration metric: took 968.05409ms to LoadCachedImages
	W0725 18:50:03.416750   60176 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0725 18:50:03.416767   60176 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.20.0 crio true true} ...
	I0725 18:50:03.416896   60176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-108542 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:03.416979   60176 ssh_runner.go:195] Run: crio config
	I0725 18:50:03.470581   60176 cni.go:84] Creating CNI manager for ""
	I0725 18:50:03.470611   60176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:03.470627   60176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:03.470647   60176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-108542 NodeName:old-k8s-version-108542 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0725 18:50:03.470772   60176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-108542"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
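The manifest above is the kubeadm/kubelet/kube-proxy configuration that minikube renders from the option struct logged at kubeadm.go:181 and then copies to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go text/template sketch of that kind of rendering, covering only a handful of the fields (the type and template below are hypothetical, not minikube's own):

package main

import (
	"os"
	"text/template"
)

// kubeadmParams carries only the fields this illustrative template needs;
// the real option struct (kubeadm.go:181) has many more knobs.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.39.29",
		BindPort:          8443,
		NodeName:          "old-k8s-version-108542",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
	}
	// Render to stdout; the real flow scps the rendered YAML to the guest instead.
	if err := template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}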
	I0725 18:50:03.470828   60176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0725 18:50:03.481757   60176 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:03.481839   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:03.494342   60176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0725 18:50:03.511779   60176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:03.532137   60176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0725 18:50:03.551049   60176 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:03.554903   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
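The bash pipeline above makes the control-plane.minikube.internal mapping in /etc/hosts idempotent: any stale line ending in that hostname is filtered out and a single fresh mapping to 192.168.39.29 is appended. A rough Go equivalent of the filter-and-append step (file name illustrative; run it against a copy rather than the live /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so exactly one line maps
// hostname to ip, mirroring the grep -v / echo pipeline shown in the log.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any stale mapping for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.copy", "192.168.39.29", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}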
	I0725 18:50:03.566677   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:03.687540   60176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:03.710900   60176 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542 for IP: 192.168.39.29
	I0725 18:50:03.710922   60176 certs.go:194] generating shared ca certs ...
	I0725 18:50:03.710937   60176 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:03.711088   60176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:03.711126   60176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:03.711132   60176 certs.go:256] generating profile certs ...
	I0725 18:50:03.711231   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key
	I0725 18:50:03.711282   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0
	I0725 18:50:03.711315   60176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key
	I0725 18:50:03.711420   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:03.711449   60176 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:03.711458   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:03.711479   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:03.711499   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:03.711520   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:03.711562   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:03.712203   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:03.762265   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:03.804226   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:03.840167   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:03.868353   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 18:50:03.893425   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:03.917266   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:03.946205   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:03.974128   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:04.001887   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:04.026495   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:04.049083   60176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:04.065407   60176 ssh_runner.go:195] Run: openssl version
	I0725 18:50:04.071064   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:04.082038   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086705   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086760   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.092445   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:04.103129   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:04.113789   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118390   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118467   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.123884   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:04.134230   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:04.144372   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148559   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148620   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.153744   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:04.163757   60176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:04.167873   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:04.173706   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:04.179385   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:04.185222   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:04.190716   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:04.196938   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
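Each openssl x509 -checkend 86400 run above asserts that the certificate will still be valid 24 hours from now; a failing check is what forces certificate regeneration on restart. A minimal Go sketch of the same test, assuming a PEM-encoded certificate file (the path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the condition that `openssl x509 -checkend <seconds>` flags as a failure.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return !cert.NotAfter.After(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}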
	I0725 18:50:04.202361   60176 kubeadm.go:392] StartCluster: {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:04.202447   60176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:04.202505   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.243628   60176 cri.go:89] found id: ""
	I0725 18:50:04.243703   60176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:04.253768   60176 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:04.253788   60176 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:04.253841   60176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:04.264596   60176 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:04.265990   60176 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-108542" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:04.266997   60176 kubeconfig.go:62] /home/jenkins/minikube-integration/19326-5877/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-108542" cluster setting kubeconfig missing "old-k8s-version-108542" context setting]
	I0725 18:50:04.268480   60176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:04.388386   60176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:04.398469   60176 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.29
	I0725 18:50:04.398517   60176 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:04.398530   60176 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:04.398590   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.434823   60176 cri.go:89] found id: ""
	I0725 18:50:04.434906   60176 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:04.453378   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:04.463520   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:04.463559   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:04.463611   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:04.473075   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:04.473138   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:04.482881   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:04.494801   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:04.494875   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:04.507011   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.516433   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:04.516505   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.528076   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:04.537505   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:04.537572   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:04.547429   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:04.556717   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:04.754947   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.606839   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.850150   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.957944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:06.039317   60176 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:06.039436   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:04.245768   59645 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:05.780345   59645 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:05.780380   59645 pod_ready.go:81] duration metric: took 4.007784646s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:05.780395   59645 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:07.787259   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:06.718406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:06.718961   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:06.718995   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:06.718902   61389 retry.go:31] will retry after 2.686345537s: waiting for machine to come up
	I0725 18:50:09.406854   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:09.407346   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:09.407388   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:09.407313   61389 retry.go:31] will retry after 3.432781605s: waiting for machine to come up
	I0725 18:50:06.539802   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.539809   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.539594   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.040315   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.539830   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.039578   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.539828   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:11.039598   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.285959   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:12.287101   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:14.181127   59378 start.go:364] duration metric: took 53.405056746s to acquireMachinesLock for "no-preload-371663"
	I0725 18:50:14.181178   59378 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:50:14.181187   59378 fix.go:54] fixHost starting: 
	I0725 18:50:14.181648   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:14.181689   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:14.198182   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I0725 18:50:14.198640   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:14.199151   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:14.199176   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:14.199619   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:14.199815   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:14.199945   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:14.201475   59378 fix.go:112] recreateIfNeeded on no-preload-371663: state=Stopped err=<nil>
	I0725 18:50:14.201496   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	W0725 18:50:14.201653   59378 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:50:14.203496   59378 out.go:177] * Restarting existing kvm2 VM for "no-preload-371663" ...
	I0725 18:50:12.841703   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842187   60732 main.go:141] libmachine: (embed-certs-646344) Found IP for machine: 192.168.61.133
	I0725 18:50:12.842222   60732 main.go:141] libmachine: (embed-certs-646344) Reserving static IP address...
	I0725 18:50:12.842234   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has current primary IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842625   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.842650   60732 main.go:141] libmachine: (embed-certs-646344) DBG | skip adding static IP to network mk-embed-certs-646344 - found existing host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"}
	I0725 18:50:12.842660   60732 main.go:141] libmachine: (embed-certs-646344) Reserved static IP address: 192.168.61.133
	I0725 18:50:12.842671   60732 main.go:141] libmachine: (embed-certs-646344) Waiting for SSH to be available...
	I0725 18:50:12.842684   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Getting to WaitForSSH function...
	I0725 18:50:12.844916   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845214   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.845237   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845372   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH client type: external
	I0725 18:50:12.845406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa (-rw-------)
	I0725 18:50:12.845474   60732 main.go:141] libmachine: (embed-certs-646344) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:12.845498   60732 main.go:141] libmachine: (embed-certs-646344) DBG | About to run SSH command:
	I0725 18:50:12.845528   60732 main.go:141] libmachine: (embed-certs-646344) DBG | exit 0
	I0725 18:50:12.968383   60732 main.go:141] libmachine: (embed-certs-646344) DBG | SSH cmd err, output: <nil>: 
	I0725 18:50:12.968690   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetConfigRaw
	I0725 18:50:12.969249   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:12.971567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972072   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.972102   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972338   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:50:12.972526   60732 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:12.972544   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:12.972739   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:12.974938   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975308   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.975336   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975462   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:12.975671   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.975831   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.976010   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:12.976184   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:12.976414   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:12.976428   60732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:13.076310   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:13.076369   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076609   60732 buildroot.go:166] provisioning hostname "embed-certs-646344"
	I0725 18:50:13.076637   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076830   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.079542   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.079895   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.079923   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.080050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.080232   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080385   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080530   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.080722   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.080917   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.080935   60732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-646344 && echo "embed-certs-646344" | sudo tee /etc/hostname
	I0725 18:50:13.193782   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-646344
	
	I0725 18:50:13.193814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.196822   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197149   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.197192   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197367   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.197581   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197772   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197906   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.198079   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.198292   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.198315   60732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-646344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-646344/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-646344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:13.313070   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:50:13.313098   60732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:13.313146   60732 buildroot.go:174] setting up certificates
	I0725 18:50:13.313161   60732 provision.go:84] configureAuth start
	I0725 18:50:13.313176   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.313457   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:13.316245   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316666   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.316695   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.319178   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319516   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.319540   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319697   60732 provision.go:143] copyHostCerts
	I0725 18:50:13.319751   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:13.319763   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:13.319816   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:13.319900   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:13.319908   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:13.319929   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:13.319981   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:13.319988   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:13.320004   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:13.320051   60732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.embed-certs-646344 san=[127.0.0.1 192.168.61.133 embed-certs-646344 localhost minikube]
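provision.go:117 issues the machine's server certificate signed by the shared CA, with SANs covering 127.0.0.1, the node IP, the machine name, localhost and minikube. A self-contained Go sketch of issuing such a SAN certificate with crypto/x509 (key size, validity and the on-the-fly CA are illustrative; minikube loads its existing ca.pem/ca-key.pem instead):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA for the example; the real flow reuses the shared minikube CA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-646344"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-646344", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.133")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}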
	I0725 18:50:13.540822   60732 provision.go:177] copyRemoteCerts
	I0725 18:50:13.540881   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:13.540903   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.543520   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.543805   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.543855   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.544013   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.544227   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.544450   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.544649   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:13.629982   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:13.652453   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:13.674398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:50:13.698302   60732 provision.go:87] duration metric: took 385.127611ms to configureAuth
	I0725 18:50:13.698329   60732 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:13.698501   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:13.698574   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.701274   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.701702   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701850   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.702049   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702345   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.702510   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.702699   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.702720   60732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:13.954912   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:13.954942   60732 machine.go:97] duration metric: took 982.402505ms to provisionDockerMachine
	I0725 18:50:13.954953   60732 start.go:293] postStartSetup for "embed-certs-646344" (driver="kvm2")
	I0725 18:50:13.954963   60732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:13.954978   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:13.955269   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:13.955301   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.957946   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958309   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.958332   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958459   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.958663   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.958805   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.959017   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.039361   60732 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:14.043389   60732 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:14.043416   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:14.043488   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:14.043588   60732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:14.043686   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:14.053277   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:14.075725   60732 start.go:296] duration metric: took 120.758673ms for postStartSetup
	I0725 18:50:14.075772   60732 fix.go:56] duration metric: took 17.662990552s for fixHost
	I0725 18:50:14.075795   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.078338   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078728   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.078782   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078932   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.079187   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079393   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.079763   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:14.080049   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:14.080068   60732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 18:50:14.180948   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933414.131955665
	
	I0725 18:50:14.180974   60732 fix.go:216] guest clock: 1721933414.131955665
	I0725 18:50:14.180983   60732 fix.go:229] Guest: 2024-07-25 18:50:14.131955665 +0000 UTC Remote: 2024-07-25 18:50:14.075776451 +0000 UTC m=+142.772748611 (delta=56.179214ms)
	I0725 18:50:14.181032   60732 fix.go:200] guest clock delta is within tolerance: 56.179214ms
	I0725 18:50:14.181038   60732 start.go:83] releasing machines lock for "embed-certs-646344", held for 17.768291807s
	I0725 18:50:14.181069   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.181338   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:14.183693   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184035   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.184065   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184195   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184748   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184936   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.185004   60732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:14.185050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.185172   60732 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:14.185203   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.187720   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188004   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188071   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188095   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188367   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188393   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188397   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188555   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.188567   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188738   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188757   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.188868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.189001   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.270424   60732 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:14.322480   60732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:14.468034   60732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:14.474022   60732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:14.474090   60732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:14.494765   60732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:50:14.494793   60732 start.go:495] detecting cgroup driver to use...
	I0725 18:50:14.494862   60732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:14.515047   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:14.531708   60732 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:14.531773   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:14.546508   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:14.560878   60732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:14.681034   60732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:14.830960   60732 docker.go:233] disabling docker service ...
	I0725 18:50:14.831032   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:14.853115   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:14.869852   60732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:14.995284   60732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:15.109759   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:15.123118   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:15.140723   60732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:50:15.140792   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.150912   60732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:15.150968   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.161603   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.173509   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.183857   60732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:15.195023   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.207216   60732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.223821   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
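The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin pause_image to registry.k8s.io/pause:3.9, set cgroup_manager to cgroupfs, force conmon_cgroup to pod, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A small Go sketch of the same line-oriented substitution for the first two settings (file name illustrative; not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfLine replaces every line matching pattern with repl inside file,
// the same effect as `sudo sed -i 's|^.*key = .*$|key = "value"|' file`.
func setConfLine(file, pattern, repl string) error {
	data, err := os.ReadFile(file)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)" + pattern)
	return os.WriteFile(file, re.ReplaceAll(data, []byte(repl)), 0644)
}

func main() {
	// Operate on a local copy of 02-crio.conf rather than the real file.
	file := "02-crio.conf"
	edits := []struct{ pattern, repl string }{
		{`^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`},
		{`^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
	}
	for _, e := range edits {
		if err := setConfLine(file, e.pattern, e.repl); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}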
	I0725 18:50:15.234472   60732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:15.243979   60732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:15.244032   60732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:15.256791   60732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
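Here the sysctl probe fails because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet, so the module is loaded and IPv4 forwarding is switched on. A minimal sketch of that fallback, assuming it runs as root on the node itself:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl file is missing, br_netfilter is not
	// loaded yet; load it, then make sure IPv4 forwarding is enabled, as in
	// the modprobe and echo steps in the log above.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
		}
	}
	// Writing to /proc requires root privileges.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enabling ip_forward failed (try running as root):", err)
	}
}
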
	I0725 18:50:15.268608   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:15.396398   60732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:50:15.528593   60732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:15.528659   60732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:15.534218   60732 start.go:563] Will wait 60s for crictl version
	I0725 18:50:15.534288   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:50:15.537933   60732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:15.583719   60732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:15.583824   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.613123   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.643327   60732 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
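After restarting crio, the log waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for its version. A small sketch of that wait, assuming a simple stat-in-a-loop poll rather than minikube's internal retry helpers:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses, roughly what
// the "Will wait 60s for socket path" step above does with repeated stat calls.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
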
	I0725 18:50:14.204765   59378 main.go:141] libmachine: (no-preload-371663) Calling .Start
	I0725 18:50:14.204935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring networks are active...
	I0725 18:50:14.205596   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network default is active
	I0725 18:50:14.205935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network mk-no-preload-371663 is active
	I0725 18:50:14.206473   59378 main.go:141] libmachine: (no-preload-371663) Getting domain xml...
	I0725 18:50:14.207048   59378 main.go:141] libmachine: (no-preload-371663) Creating domain...
	I0725 18:50:15.487909   59378 main.go:141] libmachine: (no-preload-371663) Waiting to get IP...
	I0725 18:50:15.488775   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.489188   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.489244   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.489164   61562 retry.go:31] will retry after 288.758246ms: waiting for machine to come up
	I0725 18:50:15.779810   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.780284   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.780346   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.780234   61562 retry.go:31] will retry after 255.724346ms: waiting for machine to come up
	I0725 18:50:15.644608   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:15.647958   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648356   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:15.648386   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648602   60732 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:15.652342   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
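The grep/echo/cp pipeline above replaces any existing host.minikube.internal line in /etc/hosts with the gateway address. A Go sketch of the same idea, assuming direct write access to the file (the ensureHostsEntry helper is illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for host and appends "ip\thost",
// mirroring the grep -v / echo / cp pipeline shown in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // old mapping, replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
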
	I0725 18:50:15.664409   60732 kubeadm.go:883] updating cluster {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:15.664587   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:50:15.664658   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:15.701646   60732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:50:15.701703   60732 ssh_runner.go:195] Run: which lz4
	I0725 18:50:15.705629   60732 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 18:50:15.709366   60732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:50:15.709398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:50:11.540367   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.040178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.039929   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.540517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.040281   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.540287   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.039549   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.540265   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:16.039520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.828431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:17.287944   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:16.037762   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.038357   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.038391   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.038313   61562 retry.go:31] will retry after 486.960289ms: waiting for machine to come up
	I0725 18:50:16.527269   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.527868   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.527899   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.527826   61562 retry.go:31] will retry after 389.104399ms: waiting for machine to come up
	I0725 18:50:16.918319   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.918911   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.918945   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.918854   61562 retry.go:31] will retry after 690.549271ms: waiting for machine to come up
	I0725 18:50:17.610632   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:17.611242   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:17.611269   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:17.611199   61562 retry.go:31] will retry after 753.624655ms: waiting for machine to come up
	I0725 18:50:18.366551   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:18.367078   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:18.367119   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:18.367022   61562 retry.go:31] will retry after 1.115992813s: waiting for machine to come up
	I0725 18:50:19.484121   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:19.484611   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:19.484641   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:19.484556   61562 retry.go:31] will retry after 1.306583093s: waiting for machine to come up
	I0725 18:50:20.793118   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:20.793603   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:20.793630   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:20.793548   61562 retry.go:31] will retry after 1.175948199s: waiting for machine to come up
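The retry.go lines above poll libvirt for the domain's DHCP lease with a growing, randomized delay until the machine reports an IP. The sketch below shows that retry-with-backoff pattern in isolation; retryUntil and the stand-in lookupIP function are assumptions for illustration, not minikube's actual retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a randomized, growing delay until it
// succeeds or attempts run out, similar to the "will retry after ..." messages
// emitted while waiting for the machine to come up.
func retryUntil(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2 // back off between attempts
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	// lookupIP is a stand-in for asking libvirt for the domain's DHCP lease.
	lookupIP := func() error { return errors.New("no lease yet") }
	if err := retryUntil(5, 250*time.Millisecond, lookupIP); err != nil {
		fmt.Println(err)
	}
}
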
	I0725 18:50:17.015043   60732 crio.go:462] duration metric: took 1.309449954s to copy over tarball
	I0725 18:50:17.015143   60732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:50:19.256777   60732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241585619s)
	I0725 18:50:19.256816   60732 crio.go:469] duration metric: took 2.241743782s to extract the tarball
	I0725 18:50:19.256825   60732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:50:19.293259   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:19.346692   60732 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:50:19.346714   60732 cache_images.go:84] Images are preloaded, skipping loading
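The sequence just completed checks `crictl images --output json` for the expected control-plane image, copies the preload tarball when it is missing, and unpacks it with lz4-compressed tar before re-checking. A compact sketch of that decision, assuming the commands run locally with sudo (preloadIfNeeded is an illustrative helper, and image presence is checked with a simple substring match rather than full JSON parsing):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadIfNeeded checks whether the expected control-plane image is already
// present in CRI-O and, if not, unpacks a preloaded image tarball, following
// the crictl images / tar -I lz4 sequence in the log above.
func preloadIfNeeded(wantImage, tarball string) error {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return fmt.Errorf("listing images: %w", err)
	}
	if strings.Contains(string(out), wantImage) {
		return nil // images already preloaded, nothing to do
	}
	// Extraction path and flags mirror the tar command shown in the log.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	return cmd.Run()
}

func main() {
	if err := preloadIfNeeded("registry.k8s.io/kube-apiserver:v1.30.3", "/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
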
	I0725 18:50:19.346722   60732 kubeadm.go:934] updating node { 192.168.61.133 8443 v1.30.3 crio true true} ...
	I0725 18:50:19.346822   60732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-646344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:19.346884   60732 ssh_runner.go:195] Run: crio config
	I0725 18:50:19.391246   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:19.391272   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:19.391287   60732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:19.391320   60732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.133 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-646344 NodeName:embed-certs-646344 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.133"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:19.391518   60732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-646344"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.133
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.133"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:19.391597   60732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:50:19.401672   60732 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:19.401743   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:19.410693   60732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0725 18:50:19.428155   60732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:19.443819   60732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0725 18:50:19.461139   60732 ssh_runner.go:195] Run: grep 192.168.61.133	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:19.465121   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.133	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:19.478939   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:19.593175   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:19.609679   60732 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344 for IP: 192.168.61.133
	I0725 18:50:19.609705   60732 certs.go:194] generating shared ca certs ...
	I0725 18:50:19.609726   60732 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:19.609918   60732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:19.609976   60732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:19.609989   60732 certs.go:256] generating profile certs ...
	I0725 18:50:19.610096   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/client.key
	I0725 18:50:19.610176   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key.b1982a11
	I0725 18:50:19.610227   60732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key
	I0725 18:50:19.610380   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:19.610424   60732 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:19.610436   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:19.610467   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:19.610490   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:19.610518   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:19.610575   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:19.611227   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:19.647448   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:19.679186   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:19.703996   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:19.731396   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0725 18:50:19.759550   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:50:19.795812   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:19.818419   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:19.840831   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:19.862271   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:19.886159   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:19.910827   60732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:19.926056   60732 ssh_runner.go:195] Run: openssl version
	I0725 18:50:19.931721   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:19.942217   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946261   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946324   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.951695   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:19.961642   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:19.971592   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975615   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975671   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.980904   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:19.991023   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:20.001258   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005322   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005398   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.010666   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:20.021300   60732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:20.025462   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:20.031181   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:20.037216   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:20.043670   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:20.051210   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:20.057316   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
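The openssl calls above ask the same question for each control-plane certificate: does it expire within the next 86400 seconds? The sketch below answers it with Go's crypto/x509 instead of shelling out; the expiresWithin helper and the two example paths are taken from the log but the function itself is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// inside the given window, the same check `openssl x509 -checkend 86400` makes.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
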
	I0725 18:50:20.062598   60732 kubeadm.go:392] StartCluster: {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:20.062719   60732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:20.062793   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.098154   60732 cri.go:89] found id: ""
	I0725 18:50:20.098229   60732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:20.107991   60732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:20.108017   60732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:20.108066   60732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:20.117394   60732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:20.118456   60732 kubeconfig.go:125] found "embed-certs-646344" server: "https://192.168.61.133:8443"
	I0725 18:50:20.120660   60732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:20.129546   60732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.133
	I0725 18:50:20.129576   60732 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:20.129589   60732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:20.129645   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.162792   60732 cri.go:89] found id: ""
	I0725 18:50:20.162883   60732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:20.178972   60732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:20.187981   60732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:20.188005   60732 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:20.188060   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:20.197371   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:20.197429   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:20.206704   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:20.215394   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:20.215459   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:20.224116   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.232437   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:20.232495   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.241577   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:20.249916   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:20.249976   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:20.258838   60732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
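Each grep/rm pair above checks whether an /etc/kubernetes/*.conf file still points at the expected control-plane endpoint and removes it when it does not, so the kubeadm init phases that follow can regenerate it. A minimal sketch of that cleanup, assuming local file access (removeIfStale is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes a kubeconfig-style file when it does not reference the
// expected control-plane endpoint, mirroring the grep / rm -f pairs above.
func removeIfStale(path, endpoint string) {
	data, err := os.ReadFile(path)
	if err != nil || !strings.Contains(string(data), endpoint) {
		// Missing or pointing at the wrong endpoint: drop it so the next
		// "kubeadm init phase kubeconfig" step regenerates it.
		_ = os.Remove(path)
		fmt.Println("removed stale config:", path)
	}
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		removeIfStale(f, endpoint)
	}
}
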
	I0725 18:50:20.267902   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:20.380000   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:16.539725   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.539756   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.040221   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.539666   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.040416   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.540379   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.040257   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.540153   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:21.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.787705   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:22.230346   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:21.971072   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:21.971517   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:21.971544   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:21.971471   61562 retry.go:31] will retry after 1.926890777s: waiting for machine to come up
	I0725 18:50:23.900824   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:23.901448   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:23.901479   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:23.901397   61562 retry.go:31] will retry after 1.777870483s: waiting for machine to come up
	I0725 18:50:25.681617   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:25.682161   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:25.682190   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:25.682122   61562 retry.go:31] will retry after 2.846649743s: waiting for machine to come up
	I0725 18:50:21.816404   60732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.436368273s)
	I0725 18:50:21.816441   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.014796   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.093533   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.201595   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:22.201692   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.702680   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.202769   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.701909   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.720378   60732 api_server.go:72] duration metric: took 1.518780528s to wait for apiserver process to appear ...
	I0725 18:50:23.720468   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:23.720503   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:21.540165   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.539544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.040164   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.539691   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.040229   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.540225   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.039517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.540158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.542598   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:26.542661   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:26.542677   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.653001   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.653044   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:26.721231   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.725819   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.725851   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.221435   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.226412   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.226452   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.720962   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.726521   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.726550   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:28.221186   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:28.225358   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:50:28.232310   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:50:28.232348   60732 api_server.go:131] duration metric: took 4.511861085s to wait for apiserver health ...
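The healthz checks above cycle through 403 (anonymous access forbidden), 500 with individual poststarthook failures, and finally 200 once the control plane settles. The following sketch polls the same endpoint until it returns 200 or a timeout expires; the waitForHealthz helper is illustrative, and certificate verification is skipped only because the sketch may not yet trust the cluster CA, which a real client should pin instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires. 403 and 500 responses like those in the log above
// are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %.40s...\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.133:8443/healthz", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
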
	I0725 18:50:28.232359   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:28.232368   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:28.234169   60732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:50:24.287433   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:26.287625   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.287755   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.235545   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:28.246029   60732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
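	The two steps above create /etc/cni/net.d and copy a 496-byte 1-k8s.conflist into it. The file's exact contents are not reproduced in this log; the sketch below shows what a minimal bridge conflist of that shape could look like, with the field values and pod subnet assumed purely for illustration.

```go
package cniconf

import (
	"fmt"
	"os"
)

// A minimal bridge CNI conflist. The exact contents of minikube's
// 1-k8s.conflist are not shown in the log above; the fields and the
// pod subnet here are illustrative assumptions only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}`

// writeBridgeConflist installs the config where the kubelet's CNI plugin
// discovery expects it, mirroring the mkdir and scp steps in the log.
func writeBridgeConflist() error {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		return fmt.Errorf("creating CNI config dir: %w", err)
	}
	return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}
```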
	I0725 18:50:28.265973   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:28.277752   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:28.277791   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:28.277801   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:28.277818   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:28.277830   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:28.277839   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:28.277851   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:28.277861   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:28.277868   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:28.277878   60732 system_pods.go:74] duration metric: took 11.88598ms to wait for pod list to return data ...
	I0725 18:50:28.277895   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:28.282289   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:28.282320   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:28.282335   60732 node_conditions.go:105] duration metric: took 4.431712ms to run NodePressure ...
	I0725 18:50:28.282354   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:28.551353   60732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557049   60732 kubeadm.go:739] kubelet initialised
	I0725 18:50:28.557074   60732 kubeadm.go:740] duration metric: took 5.692584ms waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557083   60732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:28.564396   60732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.568721   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568745   60732 pod_ready.go:81] duration metric: took 4.325942ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.568755   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568762   60732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.572373   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572397   60732 pod_ready.go:81] duration metric: took 3.627867ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.572404   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572411   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.576876   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576897   60732 pod_ready.go:81] duration metric: took 4.478779ms for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.576903   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576909   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.669762   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669788   60732 pod_ready.go:81] duration metric: took 92.870934ms for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.669797   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669803   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.069536   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069564   60732 pod_ready.go:81] duration metric: took 399.753713ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.069573   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069580   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.471102   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471130   60732 pod_ready.go:81] duration metric: took 401.542911ms for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.471139   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471145   60732 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.869464   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869499   60732 pod_ready.go:81] duration metric: took 398.344638ms for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.869511   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869520   60732 pod_ready.go:38] duration metric: took 1.312426343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
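	Each pod_ready.go entry above waits for a pod's Ready condition, bailing out early when the hosting node itself is not Ready. A rough client-go sketch of the Ready check is below; the clientset handle, the kube-system namespace, and the polling interval are assumptions for illustration, not minikube's own helper.

```go
package podready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls a pod in kube-system until it reports Ready or the
// timeout elapses. The 2s polling interval is an illustrative choice.
func waitPodReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q was not Ready within %s", name, timeout)
}
```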
	I0725 18:50:29.869549   60732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:29.881205   60732 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:29.881230   60732 kubeadm.go:597] duration metric: took 9.773206057s to restartPrimaryControlPlane
	I0725 18:50:29.881241   60732 kubeadm.go:394] duration metric: took 9.818649836s to StartCluster
	I0725 18:50:29.881264   60732 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.881348   60732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:29.882924   60732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.883197   60732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:29.883269   60732 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:29.883366   60732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-646344"
	I0725 18:50:29.883380   60732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-646344"
	I0725 18:50:29.883401   60732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-646344"
	W0725 18:50:29.883411   60732 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:29.883425   60732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-646344"
	I0725 18:50:29.883419   60732 addons.go:69] Setting metrics-server=true in profile "embed-certs-646344"
	I0725 18:50:29.883444   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883461   60732 addons.go:234] Setting addon metrics-server=true in "embed-certs-646344"
	W0725 18:50:29.883481   60732 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:29.883443   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:29.883512   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883840   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883870   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883929   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883969   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883935   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.884014   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.885204   60732 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:29.886676   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:29.899359   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0725 18:50:29.899418   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0725 18:50:29.899865   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900280   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900493   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900513   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900744   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900769   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900850   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901092   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901288   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.901473   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.901504   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.903520   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0725 18:50:29.903975   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.904512   60732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-646344"
	W0725 18:50:29.904529   60732 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:29.904542   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.904551   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.904558   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.904830   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.904854   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.904861   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.905388   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.905425   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.917614   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0725 18:50:29.918105   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.918628   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.918660   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.918960   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.919128   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.920885   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.922852   60732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:29.923872   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0725 18:50:29.923895   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37193
	I0725 18:50:29.924134   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:29.924148   60732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:29.924167   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.924376   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924451   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924817   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924837   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.924970   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924985   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.925223   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.925473   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.925493   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.926319   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.926366   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.926970   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.927368   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.927829   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927971   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.928192   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.928355   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.928445   60732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:28.529935   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:28.530428   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:28.530449   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:28.530381   61562 retry.go:31] will retry after 2.913225709s: waiting for machine to come up
	I0725 18:50:29.928527   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.929735   60732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:29.929755   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:29.929770   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.932668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933040   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.933066   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933304   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.933499   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.933674   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.933806   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.947401   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42223
	I0725 18:50:29.947801   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.948222   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.948249   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.948567   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.948819   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.950344   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.950550   60732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:29.950566   60732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:29.950584   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.953193   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953589   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.953618   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953892   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.954062   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.954224   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.954348   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:30.074297   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:30.095138   60732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-646344" to be "Ready" ...
	I0725 18:50:30.149031   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:30.154470   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:30.247852   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:30.247872   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:30.264189   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:30.264220   60732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:30.282583   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:30.282606   60732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:30.298927   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:31.226498   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.071992912s)
	I0725 18:50:31.226572   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226587   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.226730   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077663797s)
	I0725 18:50:31.226771   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226782   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227150   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227166   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227166   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227171   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227175   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227183   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227186   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227192   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227198   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227217   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227468   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227483   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227495   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227502   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227548   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227556   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.234538   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.234562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.234822   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.234839   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237597   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237615   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.237853   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.237871   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237871   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.237879   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237888   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.238123   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.238133   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.238144   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.238155   60732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-646344"
	I0725 18:50:31.239876   60732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:50:31.241165   60732 addons.go:510] duration metric: took 1.357900639s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0725 18:50:26.540560   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.039938   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.539928   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.039509   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.540137   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.040535   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.539745   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.039557   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.540254   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:31.040189   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.787880   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:33.288654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:31.446688   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has current primary IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447343   59378 main.go:141] libmachine: (no-preload-371663) Found IP for machine: 192.168.72.62
	I0725 18:50:31.447351   59378 main.go:141] libmachine: (no-preload-371663) Reserving static IP address...
	I0725 18:50:31.447800   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.447831   59378 main.go:141] libmachine: (no-preload-371663) DBG | skip adding static IP to network mk-no-preload-371663 - found existing host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"}
	I0725 18:50:31.447848   59378 main.go:141] libmachine: (no-preload-371663) Reserved static IP address: 192.168.72.62
	I0725 18:50:31.447862   59378 main.go:141] libmachine: (no-preload-371663) Waiting for SSH to be available...
	I0725 18:50:31.447875   59378 main.go:141] libmachine: (no-preload-371663) DBG | Getting to WaitForSSH function...
	I0725 18:50:31.449978   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450325   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.450344   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450468   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH client type: external
	I0725 18:50:31.450499   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa (-rw-------)
	I0725 18:50:31.450530   59378 main.go:141] libmachine: (no-preload-371663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:31.450547   59378 main.go:141] libmachine: (no-preload-371663) DBG | About to run SSH command:
	I0725 18:50:31.450553   59378 main.go:141] libmachine: (no-preload-371663) DBG | exit 0
	I0725 18:50:31.576105   59378 main.go:141] libmachine: (no-preload-371663) DBG | SSH cmd err, output: <nil>: 
	I0725 18:50:31.576631   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetConfigRaw
	I0725 18:50:31.577245   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.579460   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.579968   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.580003   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.580381   59378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/config.json ...
	I0725 18:50:31.580703   59378 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:31.580728   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:31.580956   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.583261   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583564   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.583592   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583717   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.583910   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584085   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584246   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.584476   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.584689   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.584701   59378 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:31.696230   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:31.696261   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696509   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:50:31.696536   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696714   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.699042   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699322   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.699359   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699484   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.699701   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699968   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.700164   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.700480   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.700503   59378 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-371663 && echo "no-preload-371663" | sudo tee /etc/hostname
	I0725 18:50:31.826044   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-371663
	
	I0725 18:50:31.826069   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.828951   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829261   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.829313   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829483   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.829695   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.829878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.830065   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.830274   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.830449   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.830466   59378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-371663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-371663/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-371663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:31.948518   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:50:31.948561   59378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:31.948739   59378 buildroot.go:174] setting up certificates
	I0725 18:50:31.948753   59378 provision.go:84] configureAuth start
	I0725 18:50:31.948771   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.949045   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.951790   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952169   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.952194   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952363   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.954317   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954610   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.954633   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954770   59378 provision.go:143] copyHostCerts
	I0725 18:50:31.954835   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:31.954848   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:31.954901   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:31.954987   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:31.954997   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:31.955021   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:31.955074   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:31.955081   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:31.955097   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:31.955149   59378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.no-preload-371663 san=[127.0.0.1 192.168.72.62 localhost minikube no-preload-371663]
	I0725 18:50:32.038369   59378 provision.go:177] copyRemoteCerts
	I0725 18:50:32.038427   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:32.038448   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.041392   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041787   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.041823   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041961   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.042148   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.042322   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.042454   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.130425   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:32.153447   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:32.179831   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:50:32.202512   59378 provision.go:87] duration metric: took 253.73326ms to configureAuth
	I0725 18:50:32.202539   59378 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:32.202722   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:32.202787   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.205038   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205415   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.205445   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205666   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.205853   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206022   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206162   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.206347   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.206543   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.206569   59378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:32.483108   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:32.483135   59378 machine.go:97] duration metric: took 902.412636ms to provisionDockerMachine
	I0725 18:50:32.483147   59378 start.go:293] postStartSetup for "no-preload-371663" (driver="kvm2")
	I0725 18:50:32.483162   59378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:32.483182   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.483495   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:32.483525   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.486096   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486457   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.486484   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486662   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.486856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.487002   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.487133   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.575210   59378 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:32.579169   59378 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:32.579196   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:32.579278   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:32.579383   59378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:32.579558   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:32.588619   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:32.611429   59378 start.go:296] duration metric: took 128.267646ms for postStartSetup
	I0725 18:50:32.611471   59378 fix.go:56] duration metric: took 18.430282963s for fixHost
	I0725 18:50:32.611493   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.614328   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614667   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.614696   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.615100   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615260   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615408   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.615587   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.615848   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.615863   59378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:50:32.724784   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933432.694745980
	
	I0725 18:50:32.724810   59378 fix.go:216] guest clock: 1721933432.694745980
	I0725 18:50:32.724822   59378 fix.go:229] Guest: 2024-07-25 18:50:32.69474598 +0000 UTC Remote: 2024-07-25 18:50:32.611474903 +0000 UTC m=+371.708292453 (delta=83.271077ms)
	I0725 18:50:32.724850   59378 fix.go:200] guest clock delta is within tolerance: 83.271077ms
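	The fix.go lines above compare the guest VM clock against the host and accept the drift when it falls inside a tolerance (about 83ms in this run). A tiny sketch of that delta check follows; the 1s tolerance is an assumed value for illustration, not the value minikube uses.

```go
package clockcheck

import (
	"fmt"
	"time"
)

// checkClockDelta compares a guest timestamp against the host clock and
// reports whether the drift is within tolerance.
func checkClockDelta(guest, host time.Time, tolerance time.Duration) error {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta > tolerance {
		return fmt.Errorf("guest clock delta %s exceeds tolerance %s", delta, tolerance)
	}
	fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	return nil
}
```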
	I0725 18:50:32.724864   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 18.543706361s
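The fix.go lines above read the guest clock over SSH with date +%s.%N and accept the drift when it stays inside a tolerance. A minimal standalone sketch of that comparison; the GUEST address is taken from the log, while the 2-second tolerance and direct SSH access are assumptions for illustration:

    #!/usr/bin/env bash
    # Compare the guest clock against the local clock and report the drift.
    set -euo pipefail
    GUEST="docker@192.168.72.62"
    TOLERANCE="2.0"   # seconds (assumed, not taken from the log)
    guest_ts=$(ssh "$GUEST" 'date +%s.%N')
    host_ts=$(date +%s.%N)
    delta=$(awk -v a="$host_ts" -v b="$guest_ts" 'BEGIN { d = a - b; if (d < 0) d = -d; printf "%.6f", d }')
    if awk -v d="$delta" -v t="$TOLERANCE" 'BEGIN { exit !(d <= t) }'; then
        echo "guest clock delta ${delta}s is within tolerance"
    else
        echo "guest clock delta ${delta}s exceeds tolerance" >&2
    fi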
	I0725 18:50:32.724891   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.725152   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:32.727958   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728294   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.728340   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728478   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.728957   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729091   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729192   59378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:32.729243   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.729319   59378 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:32.729347   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.731757   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732040   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732063   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732081   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732196   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732384   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.732538   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732557   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732562   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.732734   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732734   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.732890   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.733041   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.733164   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.845665   59378 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:32.851484   59378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:32.994671   59378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:33.000655   59378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:33.000718   59378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:33.016541   59378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
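The loopback warning and the "disabled ... bridge cni config(s)" line above come from renaming any bridge or podman CNI files so the runtime's own config wins. A roughly equivalent standalone invocation (same pattern as the find command logged above, with shell escaping added so it can be run directly):

    # Rename bridge/podman CNI configs so they stop taking effect (.mk_disabled suffix).
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;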
	I0725 18:50:33.016570   59378 start.go:495] detecting cgroup driver to use...
	I0725 18:50:33.016634   59378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:33.032473   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:33.046063   59378 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:33.046126   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:33.059249   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:33.072607   59378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:33.204647   59378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:33.353644   59378 docker.go:233] disabling docker service ...
	I0725 18:50:33.353719   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:33.368162   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:33.380709   59378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:33.521954   59378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:33.656011   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:33.668858   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:33.685751   59378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0725 18:50:33.685826   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.695022   59378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:33.695106   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.704447   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.713600   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.722782   59378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:33.733635   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.744226   59378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.761049   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
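The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Collected into one runnable snippet for the guest (a sketch of the same edits, not the exact ssh_runner sequence):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image used for pod sandboxes.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    # Use cgroupfs as the cgroup manager, with conmon in the pod cgroup.
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Allow workloads to bind low ports without extra capabilities.
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
        sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"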
	I0725 18:50:33.771689   59378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:33.781648   59378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:33.781695   59378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:33.794549   59378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
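When the bridge-netfilter sysctl cannot be read, the module is loaded and IPv4 forwarding is switched on before crio is restarted; roughly, on the guest:

    # Make sure bridged traffic hits iptables and IPv4 forwarding is on.
    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter   # the sysctl is absent until the module is loaded
    fi
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload
    sudo systemctl restart crio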
	I0725 18:50:33.803765   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:33.915398   59378 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:50:34.054477   59378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:34.054535   59378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:34.058998   59378 start.go:563] Will wait 60s for crictl version
	I0725 18:50:34.059058   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.062552   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:34.105552   59378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:34.105616   59378 ssh_runner.go:195] Run: crio --version
	I0725 18:50:34.134591   59378 ssh_runner.go:195] Run: crio --version
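The socket and version probes above wait until /var/run/crio/crio.sock exists and crictl answers; a compact sketch of that wait, using the 60-second budgets mentioned in the "Will wait 60s" lines:

    # Wait for the CRI-O socket, then confirm crictl and crio respond.
    for _ in $(seq 1 120); do                      # ~60s at 0.5s intervals
        stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
        sleep 0.5
    done
    sudo /usr/bin/crictl version
    crio --version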
	I0725 18:50:34.166581   59378 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0725 18:50:34.167725   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:34.170389   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.170838   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:34.170869   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.171014   59378 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:34.174860   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
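The /etc/hosts rewrite above filters out any stale host.minikube.internal line and appends the current gateway address; the same pattern as a standalone snippet:

    # Refresh the host.minikube.internal entry in /etc/hosts.
    HOST_IP=192.168.72.1
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '%s\thost.minikube.internal\n' "$HOST_IP"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$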
	I0725 18:50:34.186830   59378 kubeadm.go:883] updating cluster {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:34.186934   59378 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0725 18:50:34.186964   59378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:34.221834   59378 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0725 18:50:34.221863   59378 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:34.221911   59378 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.221962   59378 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.221975   59378 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.221994   59378 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0725 18:50:34.222013   59378 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.221933   59378 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.222080   59378 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.222307   59378 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223376   59378 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.223405   59378 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0725 18:50:34.223394   59378 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.223416   59378 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223385   59378 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.223445   59378 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.223639   59378 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.223759   59378 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.460560   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0725 18:50:34.464591   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.478896   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.494335   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.507397   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.519589   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.524374   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.639570   59378 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0725 18:50:34.639620   59378 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.639628   59378 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0725 18:50:34.639664   59378 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.639678   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639701   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639728   59378 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0725 18:50:34.639749   59378 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.639756   59378 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0725 18:50:34.639710   59378 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0725 18:50:34.639789   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639791   59378 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.639793   59378 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.639815   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639822   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660351   59378 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0725 18:50:34.660401   59378 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.660418   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.660438   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.660446   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660488   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.660530   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.660621   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.748020   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0725 18:50:34.748120   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748133   59378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.748181   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.748204   59378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748254   59378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.761895   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.761960   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0725 18:50:34.762002   59378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.762056   59378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:34.762069   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.766440   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0725 18:50:34.766458   59378 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766478   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0725 18:50:34.766493   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766612   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0725 18:50:34.776491   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0725 18:50:34.806227   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0725 18:50:34.806283   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:34.806386   59378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:35.506093   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
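Each cached image follows the same flow: stat the tarball on the guest, copy it over only if it is missing, then load it into the runtime with podman so CRI-O can see it. A simplified sketch for one image; paths are abbreviated, and minikube also compares size and modification time, which this sketch skips:

    # Stat, copy-if-missing, then load one cached image tarball on the guest.
    GUEST="docker@192.168.72.62"
    IMG_TAR=/var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
    LOCAL_TAR="$HOME/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0"
    if ! ssh "$GUEST" "stat -c '%s %y' $IMG_TAR" >/dev/null 2>&1; then
        scp "$LOCAL_TAR" "$GUEST:$IMG_TAR"   # transfer only when the guest copy is absent
    fi
    ssh "$GUEST" "sudo podman load -i $IMG_TAR"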
	I0725 18:50:32.098641   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:34.099078   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:31.540443   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.039950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.539852   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.039523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.539582   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.040355   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.539951   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.040161   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.540076   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:36.040195   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.787650   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:37.788363   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:36.755933   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.989415896s)
	I0725 18:50:36.755967   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0725 18:50:36.755980   59378 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.249846616s)
	I0725 18:50:36.756026   59378 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0725 18:50:36.755988   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.756064   59378 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.756113   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:36.756116   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.755938   59378 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.949524568s)
	I0725 18:50:36.756281   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0725 18:50:38.622350   59378 ssh_runner.go:235] Completed: which crictl: (1.866164977s)
	I0725 18:50:38.622426   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.866163984s)
	I0725 18:50:38.622504   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0725 18:50:38.622540   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622604   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622432   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.599286   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:37.098495   60732 node_ready.go:49] node "embed-certs-646344" has status "Ready":"True"
	I0725 18:50:37.098517   60732 node_ready.go:38] duration metric: took 7.003335292s for node "embed-certs-646344" to be "Ready" ...
	I0725 18:50:37.098526   60732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:37.104721   60732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109765   60732 pod_ready.go:92] pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.109788   60732 pod_ready.go:81] duration metric: took 5.033244ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109798   60732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113639   60732 pod_ready.go:92] pod "etcd-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.113661   60732 pod_ready.go:81] duration metric: took 3.854986ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113672   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.120875   60732 pod_ready.go:102] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:39.620552   60732 pod_ready.go:92] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:39.620573   60732 pod_ready.go:81] duration metric: took 2.506893984s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.620583   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628931   60732 pod_ready.go:92] pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.628959   60732 pod_ready.go:81] duration metric: took 1.008369558s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628973   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634812   60732 pod_ready.go:92] pod "kube-proxy-xk2lq" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.634840   60732 pod_ready.go:81] duration metric: took 5.858603ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634853   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:36.540043   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.039832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.540456   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.039553   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.539530   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.040246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.539520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.039506   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.539963   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:41.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.290126   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:42.787353   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.108821   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.486186911s)
	I0725 18:50:41.108854   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0725 18:50:41.108878   59378 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108884   59378 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.486217866s)
	I0725 18:50:41.108919   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108925   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 18:50:41.109010   59378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366140   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.257196486s)
	I0725 18:50:44.366170   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0725 18:50:44.366175   59378 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.257147663s)
	I0725 18:50:44.366192   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0725 18:50:44.366206   59378 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366252   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:45.013042   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0725 18:50:45.013079   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:45.013131   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:41.641738   60732 pod_ready.go:92] pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:41.641758   60732 pod_ready.go:81] duration metric: took 1.006897558s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:41.641768   60732 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:43.648859   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.147477   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.539822   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.039895   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.539947   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.040433   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.540098   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.040089   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.540140   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.040238   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.539529   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:46.040232   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.287326   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:47.288029   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.372000   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358829497s)
	I0725 18:50:46.372038   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0725 18:50:46.372056   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:46.372117   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:48.326922   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954778301s)
	I0725 18:50:48.326952   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0725 18:50:48.326981   59378 cache_images.go:123] Successfully loaded all cached images
	I0725 18:50:48.326987   59378 cache_images.go:92] duration metric: took 14.105111756s to LoadCachedImages
	I0725 18:50:48.326998   59378 kubeadm.go:934] updating node { 192.168.72.62 8443 v1.31.0-beta.0 crio true true} ...
	I0725 18:50:48.327229   59378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-371663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
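The [Unit]/[Service] fragment above is installed on the guest as a kubelet drop-in and the service is restarted; the later mkdir/scp/daemon-reload lines reduce to roughly the following, where 10-kubeadm.conf and kubelet.service are hypothetical local copies of the generated content:

    # Install the generated kubelet drop-in and unit, then reload and start kubelet.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
    sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # hypothetical local file
    sudo cp kubelet.service /lib/systemd/system/kubelet.service                     # hypothetical local file
    sudo systemctl daemon-reload
    sudo systemctl start kubelet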
	I0725 18:50:48.327311   59378 ssh_runner.go:195] Run: crio config
	I0725 18:50:48.380082   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:48.380104   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:48.380116   59378 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:48.380141   59378 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.62 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-371663 NodeName:no-preload-371663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:48.380276   59378 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-371663"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:48.380365   59378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0725 18:50:48.390309   59378 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:48.390375   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:48.399357   59378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0725 18:50:48.426673   59378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0725 18:50:48.443648   59378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0725 18:50:48.460908   59378 ssh_runner.go:195] Run: grep 192.168.72.62	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:48.464505   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:48.475937   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:48.598976   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:48.614468   59378 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663 for IP: 192.168.72.62
	I0725 18:50:48.614495   59378 certs.go:194] generating shared ca certs ...
	I0725 18:50:48.614511   59378 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:48.614683   59378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:48.614722   59378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:48.614732   59378 certs.go:256] generating profile certs ...
	I0725 18:50:48.614802   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.key
	I0725 18:50:48.614860   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key.1b99cd2e
	I0725 18:50:48.614894   59378 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key
	I0725 18:50:48.615018   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:48.615047   59378 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:48.615055   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:48.615091   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:48.615150   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:48.615204   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:48.615259   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:48.615987   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:48.647327   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:48.689347   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:48.718281   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:48.749086   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0725 18:50:48.775795   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:48.804894   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:48.827724   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:50:48.850476   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:48.873193   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:48.897778   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:48.922891   59378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:48.940439   59378 ssh_runner.go:195] Run: openssl version
	I0725 18:50:48.945916   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:48.956285   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960454   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960503   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.965881   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:48.975282   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:48.984697   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988899   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988958   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.993992   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:49.003677   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:49.013434   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017584   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017633   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.022926   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
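The openssl/ln pairs above install each CA under /usr/share/ca-certificates and link it into /etc/ssl/certs by its OpenSSL subject hash, which is where names like 3ec20f2e.0, b5213941.0 and 51391683.0 come from; for a single certificate the step is roughly:

    # Link one CA into /etc/ssl/certs under its subject hash.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"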
	I0725 18:50:49.033066   59378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:49.037719   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:49.043668   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:49.049308   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:49.055105   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:49.060763   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:49.066635   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
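The -checkend 86400 runs above confirm that none of the control-plane certificates expires within the next 24 hours; the same checks as a loop:

    # Flag any control-plane certificate that expires within 24 hours.
    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
               etcd/server etcd/healthcheck-client etcd/peer; do
        if ! sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400; then
            echo "certificate ${crt}.crt expires within 24h" >&2
        fi
    done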
	I0725 18:50:49.072235   59378 kubeadm.go:392] StartCluster: {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:49.072358   59378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:49.072426   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.107696   59378 cri.go:89] found id: ""
	I0725 18:50:49.107780   59378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:49.118074   59378 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:49.118098   59378 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:49.118144   59378 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:49.127465   59378 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:49.128541   59378 kubeconfig.go:125] found "no-preload-371663" server: "https://192.168.72.62:8443"
	I0725 18:50:49.130601   59378 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:49.140027   59378 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.62
	I0725 18:50:49.140074   59378 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:49.140087   59378 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:49.140148   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.188682   59378 cri.go:89] found id: ""
	I0725 18:50:49.188743   59378 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:49.205634   59378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:49.214829   59378 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:49.214858   59378 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:49.214912   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:49.223758   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:49.223825   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:49.233245   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:49.241613   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:49.241669   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:49.249965   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.258343   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:49.258404   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.267058   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:49.275241   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:49.275297   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
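
The block above is the stale kubeconfig check: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the grep fails (here every file is absent, so each grep exits with status 2 and the rm is a no-op before kubeadm regenerates them). A minimal Go sketch of that pattern follows; the runCmd helper and local execution are illustrative assumptions, not minikube's ssh_runner API.

package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a hypothetical stand-in for minikube's remote runner: it executes
// the command locally and reports whether it exited zero.
func runCmd(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443" // endpoint string as seen in the log
	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, c := range confs {
		path := "/etc/kubernetes/" + c
		// If the expected endpoint is not present (or the file is missing),
		// remove the file so kubeadm can regenerate it in the kubeconfig phase.
		if err := runCmd("sudo", "grep", endpoint, path); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, path)
			_ = runCmd("sudo", "rm", "-f", path)
		}
	}
}
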
	I0725 18:50:49.284219   59378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:49.293754   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:49.398525   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.308879   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.505415   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.573519   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
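
Instead of a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml, using the version-matched binaries under /var/lib/minikube/binaries. A hedged sketch of that sequence, run locally for illustration rather than over the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths as they appear in the log; the PATH override points at the
	// version-matched kubeadm binary staged on the guest.
	binDir := "/var/lib/minikube/binaries/v1.31.0-beta.0"
	config := "/var/tmp/minikube/kubeadm.yaml"

	// Phases in the order the log shows them being invoked.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, p, config)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}
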
	I0725 18:50:50.655766   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:50.655857   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.148464   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:50.649767   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.539657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.039681   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.540207   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.040234   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.539937   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.039544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.539646   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.039759   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.540439   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.040293   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.786573   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.786918   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:53.790293   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.156896   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.656267   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.675997   59378 api_server.go:72] duration metric: took 1.02022659s to wait for apiserver process to appear ...
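
Waiting for the apiserver process is a plain poll of `sudo pgrep -xnf kube-apiserver.*minikube.*`; the second cluster interleaved in this log (pid 60176) keeps issuing the same pgrep roughly every 500ms because its apiserver never appears. A small sketch of such a poll, with the interval and timeout chosen purely for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching kube-apiserver process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
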
	I0725 18:50:51.676029   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:51.676060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:51.676567   59378 api_server.go:269] stopped: https://192.168.72.62:8443/healthz: Get "https://192.168.72.62:8443/healthz": dial tcp 192.168.72.62:8443: connect: connection refused
	I0725 18:50:52.176176   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.302009   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.302043   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.302060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.313888   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.313913   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.676316   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.680686   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:54.680712   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.176378   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.181169   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:55.181195   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.676817   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.681072   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:50:55.689674   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:50:55.689697   59378 api_server.go:131] duration metric: took 4.013661633s to wait for apiserver health ...
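
The healthz wait above walks through the usual startup progression: connection refused while the apiserver binds, 403 for the anonymous probe before the RBAC bootstrap roles exist, 500 while the rbac and priority-class post-start hooks are still failing, then 200 "ok". A minimal sketch of that polling loop; TLS verification is skipped because the probe runs before client credentials are wired up, and the retry budget is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe is made before client certs are configured, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.62:8443/healthz" // endpoint from the log
	for i := 0; i < 60; i++ {                   // illustrative retry budget
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			// 403 and 500 are expected while the bootstrap post-start hooks run.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
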
	I0725 18:50:55.689705   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:55.689711   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:55.691626   59378 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:50:55.692856   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:55.705154   59378 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
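
With the kvm2 driver and the crio runtime, the bridge CNI is selected and a 496-byte conflist is written to /etc/cni/net.d/1-k8s.conflist. The exact payload is not reproduced in the log; the snippet below is an illustrative bridge-plus-portmap conflist of the same general shape, embedded in a small Go program that writes it to the path shown above.

package main

import "os"

// Illustrative bridge CNI conflist (not the exact 496-byte file minikube writes).
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Written to the path shown in the log; requires root on a real node.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
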
	I0725 18:50:55.722942   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:55.735231   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:55.735270   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:55.735281   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:55.735294   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:55.735303   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:55.735316   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:55.735325   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:55.735338   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:55.735346   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:55.735357   59378 system_pods.go:74] duration metric: took 12.387054ms to wait for pod list to return data ...
	I0725 18:50:55.735370   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:55.738963   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:55.738984   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:55.738998   59378 node_conditions.go:105] duration metric: took 3.619707ms to run NodePressure ...
	I0725 18:50:55.739017   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:53.151773   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:55.647633   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.540537   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.040242   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.539493   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.039657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.540427   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.039461   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.539605   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.040573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.038936   59378 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043772   59378 kubeadm.go:739] kubelet initialised
	I0725 18:50:56.043793   59378 kubeadm.go:740] duration metric: took 4.834181ms waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043801   59378 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:56.050252   59378 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.055796   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055819   59378 pod_ready.go:81] duration metric: took 5.539256ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.055827   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055845   59378 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.059725   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059745   59378 pod_ready.go:81] duration metric: took 3.890205ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.059755   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059762   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.063388   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063409   59378 pod_ready.go:81] duration metric: took 3.63968ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.063419   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063427   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.126502   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126531   59378 pod_ready.go:81] duration metric: took 63.090083ms for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.126544   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126554   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.526433   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526465   59378 pod_ready.go:81] duration metric: took 399.900344ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.526477   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526485   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.926658   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926686   59378 pod_ready.go:81] duration metric: took 400.192009ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.926696   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926702   59378 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:57.326373   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326398   59378 pod_ready.go:81] duration metric: took 399.68759ms for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:57.326408   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326415   59378 pod_ready.go:38] duration metric: took 1.282607524s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
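
Every wait in the round above short-circuits with "skipping!" because the first thing checked is the hosting node's Ready condition: while no-preload-371663 still reports Ready:"False", per-pod readiness is not meaningful, so each wait returns immediately instead of burning its 4m0s budget. A hedged client-go sketch of that predicate (the kubeconfig path and pod name are taken from the log; the wiring is an assumption, not minikube's pod_ready helpers):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19326-5877/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-no-preload-371663", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Mirror of the behaviour in the log: if the hosting node is not Ready,
	// skip the per-pod readiness check entirely.
	if !nodeReady(node) {
		fmt.Printf("node %s not Ready - skipping pod readiness check\n", node.Name)
		return
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("pod Ready=%s\n", c.Status)
		}
	}
}
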
	I0725 18:50:57.326433   59378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:57.338819   59378 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:57.338836   59378 kubeadm.go:597] duration metric: took 8.220732382s to restartPrimaryControlPlane
	I0725 18:50:57.338845   59378 kubeadm.go:394] duration metric: took 8.26661565s to StartCluster
	I0725 18:50:57.338862   59378 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.338938   59378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:57.341213   59378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.341506   59378 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:57.341574   59378 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:57.341660   59378 addons.go:69] Setting storage-provisioner=true in profile "no-preload-371663"
	I0725 18:50:57.341684   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:57.341696   59378 addons.go:234] Setting addon storage-provisioner=true in "no-preload-371663"
	I0725 18:50:57.341691   59378 addons.go:69] Setting default-storageclass=true in profile "no-preload-371663"
	W0725 18:50:57.341705   59378 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:57.341719   59378 addons.go:69] Setting metrics-server=true in profile "no-preload-371663"
	I0725 18:50:57.341737   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.341776   59378 addons.go:234] Setting addon metrics-server=true in "no-preload-371663"
	W0725 18:50:57.341790   59378 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:57.341727   59378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-371663"
	I0725 18:50:57.341827   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.342109   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342146   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342157   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342185   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342205   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342238   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.343259   59378 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:57.344618   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:57.359231   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0725 18:50:57.359295   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41709
	I0725 18:50:57.359759   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360261   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360528   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360554   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.360885   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.360970   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360989   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.361279   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.361299   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.361452   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.361551   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0725 18:50:57.361947   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.361954   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.362450   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.362468   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.362901   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.363495   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.363514   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.365316   59378 addons.go:234] Setting addon default-storageclass=true in "no-preload-371663"
	W0725 18:50:57.365329   59378 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:57.365349   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.365748   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.365785   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.377970   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0725 18:50:57.379022   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.379523   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.379543   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.379963   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.380124   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.382257   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0725 18:50:57.382648   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.382989   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
	I0725 18:50:57.383098   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383110   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.383292   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.383365   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.383456   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.383764   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.383854   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383876   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.384308   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.384905   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.384948   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.385117   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.385388   59378 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:57.386699   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:57.386716   59378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:57.386716   59378 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:57.386784   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.388097   59378 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.388127   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:57.388142   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.389322   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389752   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.389782   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389902   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.390094   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.390251   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.390402   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.391324   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391699   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.391723   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391870   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.392024   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.392156   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.392289   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.429920   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0725 18:50:57.430364   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.430865   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.430883   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.431250   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.431459   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.433381   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.433618   59378 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.433636   59378 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:57.433655   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.436318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437075   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.437100   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.437139   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437253   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.437431   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.437629   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.533461   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:57.551609   59378 node_ready.go:35] waiting up to 6m0s for node "no-preload-371663" to be "Ready" ...
	I0725 18:50:57.663269   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:57.663295   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:57.676948   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.698961   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.699589   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:57.699608   59378 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:57.732899   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:57.732928   59378 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:57.783734   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:58.930567   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.231552088s)
	I0725 18:50:58.930632   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930653   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930686   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.146908463s)
	I0725 18:50:58.930684   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.253701775s)
	I0725 18:50:58.930724   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930737   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930751   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930739   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931112   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931129   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931137   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931143   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931143   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931150   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931159   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931167   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931171   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931178   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931237   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931349   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931363   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931373   59378 addons.go:475] Verifying addon metrics-server=true in "no-preload-371663"
	I0725 18:50:58.931520   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931559   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931576   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932215   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932238   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932267   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.932277   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.932506   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.932541   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932556   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940231   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.940252   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.940516   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.940535   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940519   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.942747   59378 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0725 18:50:56.286642   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.787357   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.943983   59378 addons.go:510] duration metric: took 1.602421244s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
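
Enabling an addon here amounts to copying its manifests to /etc/kubernetes/addons/ over SSH and applying them with the guest's version-matched kubectl against /var/lib/minikube/kubeconfig; metrics-server, storage-provisioner and default-storageclass are applied in parallel and the libmachine plugin servers are closed once each apply completes. A sketch of the apply step for the metrics-server manifests, with paths copied from the log and local execution assumed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// Equivalent of: sudo KUBECONFIG=... kubectl apply -f ... -f ..., as in the log.
	out, err := exec.Command("sudo", append([]string{"env"}, args...)...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
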
	I0725 18:50:59.554933   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.648530   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:00.147626   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:56.539704   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.039573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.539523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.040168   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.540038   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.040304   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.540248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.039609   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.540022   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.039843   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.285836   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:03.287743   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.555887   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:04.056538   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:05.055354   59378 node_ready.go:49] node "no-preload-371663" has status "Ready":"True"
	I0725 18:51:05.055378   59378 node_ready.go:38] duration metric: took 7.50373959s for node "no-preload-371663" to be "Ready" ...
	I0725 18:51:05.055389   59378 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:51:05.061464   59378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066947   59378 pod_ready.go:92] pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.066967   59378 pod_ready.go:81] duration metric: took 5.477209ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066978   59378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071413   59378 pod_ready.go:92] pod "etcd-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.071431   59378 pod_ready.go:81] duration metric: took 4.445948ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071441   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076020   59378 pod_ready.go:92] pod "kube-apiserver-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.076042   59378 pod_ready.go:81] duration metric: took 4.593495ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076053   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:02.648362   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:04.648959   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.539808   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.039515   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.540034   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.040266   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.539829   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.039496   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.540260   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.040236   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.540450   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:06.039595   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:06.039675   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:06.077020   60176 cri.go:89] found id: ""
	I0725 18:51:06.077048   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.077058   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:06.077066   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:06.077125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:06.109173   60176 cri.go:89] found id: ""
	I0725 18:51:06.109203   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.109213   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:06.109220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:06.109283   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:06.141838   60176 cri.go:89] found id: ""
	I0725 18:51:06.141875   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.141882   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:06.141888   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:06.141947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:06.175036   60176 cri.go:89] found id: ""
	I0725 18:51:06.175063   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.175074   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:06.175081   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:06.175144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:06.207497   60176 cri.go:89] found id: ""
	I0725 18:51:06.207519   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.207527   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:06.207532   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:06.207589   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:06.241910   60176 cri.go:89] found id: ""
	I0725 18:51:06.241936   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.241943   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:06.241948   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:06.242001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:06.273353   60176 cri.go:89] found id: ""
	I0725 18:51:06.273381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.273391   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:06.273398   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:06.273472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:06.307358   60176 cri.go:89] found id: ""
	I0725 18:51:06.307381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.307391   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:06.307401   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:06.307415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:06.360759   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:06.360792   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:06.373930   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:06.373956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:51:05.787345   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:08.287436   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:07.081865   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.082937   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:10.583975   59378 pod_ready.go:92] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.584001   59378 pod_ready.go:81] duration metric: took 5.507938695s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.584015   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588959   59378 pod_ready.go:92] pod "kube-proxy-bf9rt" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.588978   59378 pod_ready.go:81] duration metric: took 4.956126ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588986   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593238   59378 pod_ready.go:92] pod "kube-scheduler-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.593255   59378 pod_ready.go:81] duration metric: took 4.263169ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593263   59378 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:07.147874   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.649266   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:51:06.488979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:06.489003   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:06.489018   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:06.553782   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:06.553813   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.093966   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:09.106176   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:09.106242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:09.143847   60176 cri.go:89] found id: ""
	I0725 18:51:09.143872   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.143880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:09.143885   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:09.143936   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:09.178605   60176 cri.go:89] found id: ""
	I0725 18:51:09.178636   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.178647   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:09.178654   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:09.178715   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:09.211866   60176 cri.go:89] found id: ""
	I0725 18:51:09.211892   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.211901   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:09.211906   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:09.211957   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:09.244343   60176 cri.go:89] found id: ""
	I0725 18:51:09.244371   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.244381   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:09.244389   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:09.244445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:09.279416   60176 cri.go:89] found id: ""
	I0725 18:51:09.279440   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.279448   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:09.279463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:09.279530   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:09.317039   60176 cri.go:89] found id: ""
	I0725 18:51:09.317064   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.317071   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:09.317077   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:09.317123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:09.347997   60176 cri.go:89] found id: ""
	I0725 18:51:09.348031   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.348042   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:09.348049   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:09.348107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:09.380485   60176 cri.go:89] found id: ""
	I0725 18:51:09.380514   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.380524   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:09.380535   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:09.380560   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:09.451881   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:09.451920   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.488427   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:09.488454   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:09.538096   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:09.538142   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:09.551001   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:09.551026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:09.628882   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:10.287604   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.787008   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.600101   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:15.102797   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.149625   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:14.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.129787   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:12.141852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:12.141915   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:12.178227   60176 cri.go:89] found id: ""
	I0725 18:51:12.178257   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.178266   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:12.178271   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:12.178329   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:12.209154   60176 cri.go:89] found id: ""
	I0725 18:51:12.209179   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.209186   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:12.209190   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:12.209238   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:12.244091   60176 cri.go:89] found id: ""
	I0725 18:51:12.244119   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.244127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:12.244134   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:12.244183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:12.277865   60176 cri.go:89] found id: ""
	I0725 18:51:12.277894   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.277906   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:12.277911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:12.277958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:12.311172   60176 cri.go:89] found id: ""
	I0725 18:51:12.311196   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.311207   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:12.311214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:12.311274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:12.341668   60176 cri.go:89] found id: ""
	I0725 18:51:12.341696   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.341706   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:12.341714   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:12.341775   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:12.375342   60176 cri.go:89] found id: ""
	I0725 18:51:12.375372   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.375383   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:12.375390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:12.375449   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:12.409783   60176 cri.go:89] found id: ""
	I0725 18:51:12.409807   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.409814   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:12.409822   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:12.409834   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:12.484503   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:12.484546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:12.522948   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:12.522974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:12.573975   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:12.574008   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:12.587600   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:12.587628   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:12.660403   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.161385   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:15.174773   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:15.174845   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:15.206845   60176 cri.go:89] found id: ""
	I0725 18:51:15.206871   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.206882   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:15.206889   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:15.206949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:15.239306   60176 cri.go:89] found id: ""
	I0725 18:51:15.239335   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.239344   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:15.239350   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:15.239437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:15.276152   60176 cri.go:89] found id: ""
	I0725 18:51:15.276187   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.276198   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:15.276207   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:15.276265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:15.309616   60176 cri.go:89] found id: ""
	I0725 18:51:15.309647   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.309659   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:15.309667   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:15.309729   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:15.343938   60176 cri.go:89] found id: ""
	I0725 18:51:15.343967   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.343978   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:15.343985   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:15.344051   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:15.380268   60176 cri.go:89] found id: ""
	I0725 18:51:15.380298   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.380310   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:15.380317   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:15.380448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:15.421291   60176 cri.go:89] found id: ""
	I0725 18:51:15.421337   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.421347   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:15.421353   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:15.421408   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:15.466805   60176 cri.go:89] found id: ""
	I0725 18:51:15.466826   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.466835   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:15.466845   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:15.466859   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:15.513464   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:15.513489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:15.567742   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:15.567775   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:15.583613   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:15.583647   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:15.653613   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.653637   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:15.653651   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:15.287256   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.786753   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.599678   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.600015   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.147792   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.148724   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:18.230294   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:18.244269   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:18.244352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:18.282255   60176 cri.go:89] found id: ""
	I0725 18:51:18.282281   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.282291   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:18.282298   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:18.282377   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:18.316217   60176 cri.go:89] found id: ""
	I0725 18:51:18.316250   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.316261   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:18.316269   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:18.316349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:18.347730   60176 cri.go:89] found id: ""
	I0725 18:51:18.347756   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.347764   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:18.347769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:18.347815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:18.379968   60176 cri.go:89] found id: ""
	I0725 18:51:18.379991   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.379999   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:18.380006   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:18.380062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:18.415621   60176 cri.go:89] found id: ""
	I0725 18:51:18.415644   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.415652   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:18.415657   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:18.415704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:18.452073   60176 cri.go:89] found id: ""
	I0725 18:51:18.452101   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.452109   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:18.452115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:18.452171   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:18.483337   60176 cri.go:89] found id: ""
	I0725 18:51:18.483382   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.483390   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:18.483396   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:18.483440   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:18.516941   60176 cri.go:89] found id: ""
	I0725 18:51:18.516966   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.516976   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:18.516987   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:18.517002   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:18.587295   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:18.587321   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:18.587338   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:18.666539   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:18.666569   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:18.707434   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:18.707465   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:18.761893   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:18.761932   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.276464   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:21.291939   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:21.292011   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:21.326022   60176 cri.go:89] found id: ""
	I0725 18:51:21.326055   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.326066   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:21.326073   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:21.326130   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:21.366081   60176 cri.go:89] found id: ""
	I0725 18:51:21.366104   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.366112   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:21.366117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:21.366165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:20.287325   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.287799   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.101134   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:24.600119   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.647763   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:23.648088   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:25.649170   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.403086   60176 cri.go:89] found id: ""
	I0725 18:51:21.403111   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.403122   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:21.403128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:21.403208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:21.439268   60176 cri.go:89] found id: ""
	I0725 18:51:21.439297   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.439305   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:21.439310   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:21.439359   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:21.483601   60176 cri.go:89] found id: ""
	I0725 18:51:21.483631   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.483639   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:21.483645   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:21.483704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:21.519061   60176 cri.go:89] found id: ""
	I0725 18:51:21.519093   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.519103   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:21.519111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:21.519186   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:21.548781   60176 cri.go:89] found id: ""
	I0725 18:51:21.548806   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.548814   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:21.548820   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:21.548881   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:21.581940   60176 cri.go:89] found id: ""
	I0725 18:51:21.581963   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.581970   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:21.581979   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:21.581991   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:21.634758   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:21.634795   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.648358   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:21.648382   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:21.716109   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:21.716133   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:21.716149   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:21.794003   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:21.794030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.331731   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:24.344646   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:24.344709   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:24.385373   60176 cri.go:89] found id: ""
	I0725 18:51:24.385395   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.385403   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:24.385408   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:24.385453   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:24.417015   60176 cri.go:89] found id: ""
	I0725 18:51:24.417044   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.417054   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:24.417061   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:24.417125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:24.457093   60176 cri.go:89] found id: ""
	I0725 18:51:24.457118   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.457127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:24.457132   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:24.457197   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:24.489155   60176 cri.go:89] found id: ""
	I0725 18:51:24.489183   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.489192   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:24.489197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:24.489253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:24.521907   60176 cri.go:89] found id: ""
	I0725 18:51:24.521934   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.521943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:24.521949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:24.522006   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:24.553652   60176 cri.go:89] found id: ""
	I0725 18:51:24.553688   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.553698   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:24.553705   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:24.553765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:24.587957   60176 cri.go:89] found id: ""
	I0725 18:51:24.587989   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.587997   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:24.588002   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:24.588060   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:24.623564   60176 cri.go:89] found id: ""
	I0725 18:51:24.623591   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.623600   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:24.623609   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:24.623624   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:24.676176   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:24.676208   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:24.689179   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:24.689202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:24.761900   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:24.761928   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:24.761943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:24.845021   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:24.845058   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.287960   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:26.288704   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.788851   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.099186   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:29.100563   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.147374   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:30.148158   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.384900   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:27.398947   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:27.399009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:27.431604   60176 cri.go:89] found id: ""
	I0725 18:51:27.431632   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.431641   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:27.431648   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:27.431698   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:27.464167   60176 cri.go:89] found id: ""
	I0725 18:51:27.464201   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.464212   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:27.464220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:27.464279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:27.497951   60176 cri.go:89] found id: ""
	I0725 18:51:27.497985   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.497996   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:27.498003   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:27.498056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:27.535363   60176 cri.go:89] found id: ""
	I0725 18:51:27.535389   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.535401   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:27.535406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:27.535452   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:27.565506   60176 cri.go:89] found id: ""
	I0725 18:51:27.565531   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.565541   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:27.565548   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:27.565615   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:27.595635   60176 cri.go:89] found id: ""
	I0725 18:51:27.595662   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.595672   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:27.595678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:27.595734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:27.627482   60176 cri.go:89] found id: ""
	I0725 18:51:27.627511   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.627522   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:27.627529   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:27.627596   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:27.663481   60176 cri.go:89] found id: ""
	I0725 18:51:27.663507   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.663517   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:27.663530   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:27.663544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:27.746487   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:27.746519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:27.783100   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:27.783128   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:27.834865   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:27.834895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:27.849097   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:27.849124   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:27.914406   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:30.415417   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:30.429086   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:30.429151   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:30.470514   60176 cri.go:89] found id: ""
	I0725 18:51:30.470538   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.470561   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:30.470569   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:30.470629   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:30.503903   60176 cri.go:89] found id: ""
	I0725 18:51:30.503931   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.503942   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:30.503950   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:30.504014   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:30.535562   60176 cri.go:89] found id: ""
	I0725 18:51:30.535589   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.535597   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:30.535602   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:30.535667   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:30.567435   60176 cri.go:89] found id: ""
	I0725 18:51:30.567461   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.567471   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:30.567478   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:30.567538   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:30.604430   60176 cri.go:89] found id: ""
	I0725 18:51:30.604455   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.604465   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:30.604471   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:30.604540   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:30.644788   60176 cri.go:89] found id: ""
	I0725 18:51:30.644814   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.644834   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:30.644843   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:30.644908   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:30.678530   60176 cri.go:89] found id: ""
	I0725 18:51:30.678572   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.678585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:30.678593   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:30.678668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:30.713090   60176 cri.go:89] found id: ""
	I0725 18:51:30.713112   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.713120   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:30.713128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:30.713141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:30.792075   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:30.792106   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:30.829452   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:30.829482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:30.879437   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:30.879474   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:30.892281   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:30.892308   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:30.959814   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:31.286895   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.786731   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:31.599727   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.600800   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:35.601282   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:32.647508   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:34.648594   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.460838   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:33.474242   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:33.474351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:33.508097   60176 cri.go:89] found id: ""
	I0725 18:51:33.508125   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.508134   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:33.508140   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:33.508188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:33.542576   60176 cri.go:89] found id: ""
	I0725 18:51:33.542605   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.542612   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:33.542618   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:33.542666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:33.576079   60176 cri.go:89] found id: ""
	I0725 18:51:33.576106   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.576115   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:33.576122   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:33.576187   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:33.610618   60176 cri.go:89] found id: ""
	I0725 18:51:33.610639   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.610646   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:33.610651   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:33.610702   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:33.641925   60176 cri.go:89] found id: ""
	I0725 18:51:33.641960   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.641972   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:33.641979   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:33.642047   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:33.675283   60176 cri.go:89] found id: ""
	I0725 18:51:33.675318   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.675333   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:33.675346   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:33.675412   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:33.707991   60176 cri.go:89] found id: ""
	I0725 18:51:33.708017   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.708026   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:33.708034   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:33.708094   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:33.744209   60176 cri.go:89] found id: ""
	I0725 18:51:33.744237   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.744247   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:33.744258   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:33.744273   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:33.794620   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:33.794648   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:33.807089   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:33.807118   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:33.870937   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:33.870960   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:33.870976   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:33.953214   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:33.953249   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:36.287050   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.788127   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.100230   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:40.600037   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:37.147276   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:39.147994   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:36.491625   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:36.504949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:36.505023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:36.538077   60176 cri.go:89] found id: ""
	I0725 18:51:36.538101   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.538109   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:36.538114   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:36.538165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:36.570239   60176 cri.go:89] found id: ""
	I0725 18:51:36.570262   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.570269   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:36.570275   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:36.570325   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:36.603096   60176 cri.go:89] found id: ""
	I0725 18:51:36.603124   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.603133   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:36.603139   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:36.603196   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:36.637479   60176 cri.go:89] found id: ""
	I0725 18:51:36.637506   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.637518   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:36.637525   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:36.637580   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:36.670834   60176 cri.go:89] found id: ""
	I0725 18:51:36.670859   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.670868   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:36.670875   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:36.670942   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:36.707825   60176 cri.go:89] found id: ""
	I0725 18:51:36.707851   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.707859   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:36.707866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:36.707924   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:36.748014   60176 cri.go:89] found id: ""
	I0725 18:51:36.748046   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.748058   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:36.748067   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:36.748132   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:36.779939   60176 cri.go:89] found id: ""
	I0725 18:51:36.779967   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.779975   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:36.779982   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:36.779994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:36.836710   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:36.836741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:36.849791   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:36.849830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:36.919247   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:36.919270   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:36.919286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:36.994368   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:36.994405   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:39.530980   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:39.543355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:39.543417   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:39.576897   60176 cri.go:89] found id: ""
	I0725 18:51:39.576925   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.576935   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:39.576941   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:39.576996   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:39.610545   60176 cri.go:89] found id: ""
	I0725 18:51:39.610576   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.610584   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:39.610596   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:39.610651   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:39.642072   60176 cri.go:89] found id: ""
	I0725 18:51:39.642097   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.642107   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:39.642114   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:39.642173   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:39.673841   60176 cri.go:89] found id: ""
	I0725 18:51:39.673866   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.673874   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:39.673880   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:39.673933   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:39.706537   60176 cri.go:89] found id: ""
	I0725 18:51:39.706562   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.706571   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:39.706584   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:39.706635   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:39.744897   60176 cri.go:89] found id: ""
	I0725 18:51:39.744924   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.744935   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:39.744942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:39.745004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:39.780466   60176 cri.go:89] found id: ""
	I0725 18:51:39.780493   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.780503   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:39.780510   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:39.780581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:39.813672   60176 cri.go:89] found id: ""
	I0725 18:51:39.813694   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.813701   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:39.813709   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:39.813721   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:39.862459   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:39.862489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:39.875276   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:39.875304   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:39.941693   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:39.941715   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:39.941729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:40.017010   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:40.017055   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:41.286377   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.289761   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.600311   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.098813   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:41.647858   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.647939   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.559158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:42.571866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:42.571945   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:42.605268   60176 cri.go:89] found id: ""
	I0725 18:51:42.605317   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.605326   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:42.605332   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:42.605392   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:42.641719   60176 cri.go:89] found id: ""
	I0725 18:51:42.641753   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.641764   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:42.641774   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:42.641837   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:42.675667   60176 cri.go:89] found id: ""
	I0725 18:51:42.675695   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.675703   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:42.675711   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:42.675773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:42.709895   60176 cri.go:89] found id: ""
	I0725 18:51:42.709923   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.709933   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:42.709940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:42.710002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:42.742278   60176 cri.go:89] found id: ""
	I0725 18:51:42.742308   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.742318   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:42.742325   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:42.742395   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:42.773623   60176 cri.go:89] found id: ""
	I0725 18:51:42.773651   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.773661   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:42.773668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:42.773727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:42.810538   60176 cri.go:89] found id: ""
	I0725 18:51:42.810566   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.810576   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:42.810583   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:42.810657   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:42.850508   60176 cri.go:89] found id: ""
	I0725 18:51:42.850530   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.850537   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:42.850545   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:42.850556   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:42.901350   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:42.901389   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:42.914573   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:42.914600   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:42.978823   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:42.978852   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:42.978866   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:43.057323   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:43.057357   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:45.593677   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:45.607689   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:45.607801   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:45.640969   60176 cri.go:89] found id: ""
	I0725 18:51:45.640997   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.641007   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:45.641014   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:45.641075   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:45.672268   60176 cri.go:89] found id: ""
	I0725 18:51:45.672293   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.672300   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:45.672310   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:45.672396   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:45.705582   60176 cri.go:89] found id: ""
	I0725 18:51:45.705610   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.705618   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:45.705625   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:45.705686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:45.747705   60176 cri.go:89] found id: ""
	I0725 18:51:45.747737   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.747759   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:45.747766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:45.747815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:45.787258   60176 cri.go:89] found id: ""
	I0725 18:51:45.787284   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.787294   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:45.787302   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:45.787366   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:45.820971   60176 cri.go:89] found id: ""
	I0725 18:51:45.820992   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.821008   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:45.821019   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:45.821068   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:45.853828   60176 cri.go:89] found id: ""
	I0725 18:51:45.853858   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.853869   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:45.853876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:45.853935   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:45.886645   60176 cri.go:89] found id: ""
	I0725 18:51:45.886672   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.886682   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:45.886692   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:45.886708   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:45.953195   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:45.953223   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:45.953239   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:46.027894   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:46.027929   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:46.067935   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:46.067960   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:46.120467   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:46.120500   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:45.788103   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.287846   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:47.100357   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:49.100578   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.148035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:50.148589   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.634095   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:48.647390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:48.647464   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:48.683149   60176 cri.go:89] found id: ""
	I0725 18:51:48.683171   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.683178   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:48.683203   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:48.683252   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:48.720502   60176 cri.go:89] found id: ""
	I0725 18:51:48.720529   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.720539   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:48.720546   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:48.720593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:48.752927   60176 cri.go:89] found id: ""
	I0725 18:51:48.752954   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.752962   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:48.752968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:48.753025   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:48.788434   60176 cri.go:89] found id: ""
	I0725 18:51:48.788460   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.788468   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:48.788474   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:48.788520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:48.825157   60176 cri.go:89] found id: ""
	I0725 18:51:48.825184   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.825194   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:48.825199   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:48.825248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:48.859948   60176 cri.go:89] found id: ""
	I0725 18:51:48.859973   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.859981   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:48.859986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:48.860046   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:48.894788   60176 cri.go:89] found id: ""
	I0725 18:51:48.894811   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.894819   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:48.894824   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:48.894878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:48.929619   60176 cri.go:89] found id: ""
	I0725 18:51:48.929645   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.929653   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:48.929662   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:48.929675   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:49.001842   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:49.001865   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:49.001888   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:49.086265   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:49.086299   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:49.127674   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:49.127704   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:49.181388   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:49.181424   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:50.787213   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:53.287266   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.601462   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.099078   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:52.647863   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.648789   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.695119   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:51.707568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:51.707630   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:51.742936   60176 cri.go:89] found id: ""
	I0725 18:51:51.742963   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.742973   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:51.742980   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:51.743033   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:51.776584   60176 cri.go:89] found id: ""
	I0725 18:51:51.776610   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.776618   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:51.776623   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:51.776691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:51.809763   60176 cri.go:89] found id: ""
	I0725 18:51:51.809787   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.809795   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:51.809800   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:51.809846   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:51.843330   60176 cri.go:89] found id: ""
	I0725 18:51:51.843359   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.843366   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:51.843372   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:51.843428   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:51.877636   60176 cri.go:89] found id: ""
	I0725 18:51:51.877670   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.877680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:51.877685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:51.877734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:51.911846   60176 cri.go:89] found id: ""
	I0725 18:51:51.911869   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.911876   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:51.911881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:51.911927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:51.945447   60176 cri.go:89] found id: ""
	I0725 18:51:51.945474   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.945482   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:51.945488   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:51.945539   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:51.976801   60176 cri.go:89] found id: ""
	I0725 18:51:51.976828   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.976838   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:51.976848   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:51.976863   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:51.989131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:51.989158   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:52.055834   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:52.055857   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:52.055871   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:52.132360   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:52.132399   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:52.170676   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:52.170706   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:54.724654   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:54.738852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:54.738910   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:54.772356   60176 cri.go:89] found id: ""
	I0725 18:51:54.772386   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.772396   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:54.772403   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:54.772463   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:54.805079   60176 cri.go:89] found id: ""
	I0725 18:51:54.805105   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.805115   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:54.805122   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:54.805179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:54.836276   60176 cri.go:89] found id: ""
	I0725 18:51:54.836303   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.836313   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:54.836329   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:54.836394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:54.869019   60176 cri.go:89] found id: ""
	I0725 18:51:54.869046   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.869053   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:54.869059   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:54.869108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:54.905448   60176 cri.go:89] found id: ""
	I0725 18:51:54.905475   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.905485   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:54.905492   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:54.905553   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:54.937364   60176 cri.go:89] found id: ""
	I0725 18:51:54.937387   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.937396   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:54.937401   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:54.937448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:54.969287   60176 cri.go:89] found id: ""
	I0725 18:51:54.969322   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.969333   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:54.969340   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:54.969405   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:55.002779   60176 cri.go:89] found id: ""
	I0725 18:51:55.002804   60176 logs.go:276] 0 containers: []
	W0725 18:51:55.002811   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:55.002819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:55.002830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:55.015588   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:55.015614   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:55.093349   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:55.093372   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:55.093388   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:55.174006   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:55.174046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:55.211316   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:55.211347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:55.787379   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.286757   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:56.099628   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.100403   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:00.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.148430   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:59.648971   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.762027   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:57.774121   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:57.774194   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:57.814748   60176 cri.go:89] found id: ""
	I0725 18:51:57.814779   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.814790   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:57.814798   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:57.814860   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:57.851037   60176 cri.go:89] found id: ""
	I0725 18:51:57.851063   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.851070   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:57.851075   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:57.851123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:57.882717   60176 cri.go:89] found id: ""
	I0725 18:51:57.882749   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.882760   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:57.882768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:57.882830   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:57.917019   60176 cri.go:89] found id: ""
	I0725 18:51:57.917049   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.917059   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:57.917066   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:57.917126   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:57.950853   60176 cri.go:89] found id: ""
	I0725 18:51:57.950882   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.950891   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:57.950896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:57.950962   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:57.991946   60176 cri.go:89] found id: ""
	I0725 18:51:57.991970   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.991980   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:57.991986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:57.992049   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:58.037572   60176 cri.go:89] found id: ""
	I0725 18:51:58.037602   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.037611   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:58.037617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:58.037679   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:58.073018   60176 cri.go:89] found id: ""
	I0725 18:51:58.073040   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.073048   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:58.073056   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:58.073068   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:58.144357   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:58.144382   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:58.144398   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:58.224162   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:58.224202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:58.260888   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:58.260914   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:58.313819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:58.313848   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:00.826939   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:00.838883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:00.838951   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:00.872544   60176 cri.go:89] found id: ""
	I0725 18:52:00.872573   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.872584   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:00.872600   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:00.872663   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:00.903504   60176 cri.go:89] found id: ""
	I0725 18:52:00.903526   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.903533   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:00.903539   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:00.903585   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:00.938057   60176 cri.go:89] found id: ""
	I0725 18:52:00.938085   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.938095   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:00.938103   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:00.938168   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:00.970586   60176 cri.go:89] found id: ""
	I0725 18:52:00.970616   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.970625   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:00.970631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:00.970699   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:01.004158   60176 cri.go:89] found id: ""
	I0725 18:52:01.004192   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.004201   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:01.004205   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:01.004265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:01.036833   60176 cri.go:89] found id: ""
	I0725 18:52:01.036862   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.036871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:01.036876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:01.036927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:01.072207   60176 cri.go:89] found id: ""
	I0725 18:52:01.072236   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.072247   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:01.072253   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:01.072309   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:01.110805   60176 cri.go:89] found id: ""
	I0725 18:52:01.110859   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.110871   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:01.110882   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:01.110898   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:01.150422   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:01.150448   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:01.198988   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:01.199026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:01.212826   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:01.212860   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:01.282008   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:01.282034   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:01.282054   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:00.787431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.286174   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.599299   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:05.099494   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.147372   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:04.147989   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.148300   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.865014   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:03.877335   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:03.877419   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:03.913376   60176 cri.go:89] found id: ""
	I0725 18:52:03.913406   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.913413   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:03.913420   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:03.913469   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:03.948997   60176 cri.go:89] found id: ""
	I0725 18:52:03.949022   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.949029   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:03.949034   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:03.949082   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:03.985320   60176 cri.go:89] found id: ""
	I0725 18:52:03.985353   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.985361   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:03.985367   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:03.985423   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:04.019626   60176 cri.go:89] found id: ""
	I0725 18:52:04.019648   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.019656   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:04.019662   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:04.019716   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:04.050947   60176 cri.go:89] found id: ""
	I0725 18:52:04.050978   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.050989   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:04.050997   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:04.051066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:04.083581   60176 cri.go:89] found id: ""
	I0725 18:52:04.083613   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.083625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:04.083633   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:04.083712   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:04.117537   60176 cri.go:89] found id: ""
	I0725 18:52:04.117574   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.117585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:04.117592   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:04.117652   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:04.151531   60176 cri.go:89] found id: ""
	I0725 18:52:04.151556   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.151563   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:04.151575   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:04.151593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:04.201037   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:04.201067   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:04.214848   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:04.214879   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:04.281309   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:04.281338   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:04.281353   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:04.360880   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:04.360913   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:05.287780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.288971   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.100417   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:09.602529   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:08.149450   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:10.647672   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.899950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:06.912053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:06.912124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:06.945726   60176 cri.go:89] found id: ""
	I0725 18:52:06.945752   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.945761   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:06.945766   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:06.945824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:06.979170   60176 cri.go:89] found id: ""
	I0725 18:52:06.979200   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.979210   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:06.979217   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:06.979279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:07.009633   60176 cri.go:89] found id: ""
	I0725 18:52:07.009661   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.009670   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:07.009675   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:07.009735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:07.042022   60176 cri.go:89] found id: ""
	I0725 18:52:07.042045   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.042054   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:07.042061   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:07.042121   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:07.074755   60176 cri.go:89] found id: ""
	I0725 18:52:07.074779   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.074787   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:07.074792   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:07.074853   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:07.109421   60176 cri.go:89] found id: ""
	I0725 18:52:07.109447   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.109457   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:07.109464   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:07.109522   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:07.144848   60176 cri.go:89] found id: ""
	I0725 18:52:07.144879   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.144889   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:07.144897   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:07.144956   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:07.182129   60176 cri.go:89] found id: ""
	I0725 18:52:07.182157   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.182169   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:07.182178   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:07.182192   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:07.235471   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:07.235509   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:07.251999   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:07.252025   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:07.334671   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:07.334691   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:07.334703   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:07.415819   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:07.415853   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.953603   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:09.966281   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:09.966362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:09.998237   60176 cri.go:89] found id: ""
	I0725 18:52:09.998259   60176 logs.go:276] 0 containers: []
	W0725 18:52:09.998267   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:09.998272   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:09.998332   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:10.030191   60176 cri.go:89] found id: ""
	I0725 18:52:10.030213   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.030220   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:10.030228   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:10.030273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:10.062117   60176 cri.go:89] found id: ""
	I0725 18:52:10.062144   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.062154   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:10.062159   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:10.062208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:10.093801   60176 cri.go:89] found id: ""
	I0725 18:52:10.093831   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.093841   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:10.093848   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:10.093911   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:10.125705   60176 cri.go:89] found id: ""
	I0725 18:52:10.125731   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.125741   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:10.125748   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:10.125814   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:10.158731   60176 cri.go:89] found id: ""
	I0725 18:52:10.158753   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.158761   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:10.158766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:10.158810   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:10.190408   60176 cri.go:89] found id: ""
	I0725 18:52:10.190435   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.190443   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:10.190449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:10.190503   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:10.221937   60176 cri.go:89] found id: ""
	I0725 18:52:10.221967   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.221977   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:10.221992   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:10.222007   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:10.270299   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:10.270332   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:10.283787   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:10.283823   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:10.358121   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:10.358146   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:10.358163   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:10.437607   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:10.437643   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.786088   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:11.786251   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:13.786457   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.099676   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.600380   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.647922   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.648433   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.978064   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:12.995812   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:12.995868   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:13.041196   60176 cri.go:89] found id: ""
	I0725 18:52:13.041222   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.041231   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:13.041239   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:13.041290   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:13.074981   60176 cri.go:89] found id: ""
	I0725 18:52:13.075005   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.075013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:13.075018   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:13.075078   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:13.108689   60176 cri.go:89] found id: ""
	I0725 18:52:13.108714   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.108725   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:13.108732   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:13.108788   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:13.144876   60176 cri.go:89] found id: ""
	I0725 18:52:13.144903   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.144913   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:13.144920   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:13.145008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:13.177912   60176 cri.go:89] found id: ""
	I0725 18:52:13.177936   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.177943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:13.177949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:13.178004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:13.208752   60176 cri.go:89] found id: ""
	I0725 18:52:13.208783   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.208794   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:13.208802   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:13.208861   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:13.240146   60176 cri.go:89] found id: ""
	I0725 18:52:13.240181   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.240191   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:13.240197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:13.240265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:13.276749   60176 cri.go:89] found id: ""
	I0725 18:52:13.276775   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.276783   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:13.276793   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:13.276808   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:13.342307   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:13.342341   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:13.342358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:13.426659   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:13.426691   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:13.462986   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:13.463014   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:13.513921   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:13.513956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.028587   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:16.041712   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:16.041771   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:16.074562   60176 cri.go:89] found id: ""
	I0725 18:52:16.074593   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.074603   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:16.074611   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:16.074668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:16.110581   60176 cri.go:89] found id: ""
	I0725 18:52:16.110610   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.110620   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:16.110627   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:16.110686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:16.145233   60176 cri.go:89] found id: ""
	I0725 18:52:16.145256   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.145266   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:16.145274   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:16.145333   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:16.180032   60176 cri.go:89] found id: ""
	I0725 18:52:16.180059   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.180070   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:16.180084   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:16.180147   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:16.211984   60176 cri.go:89] found id: ""
	I0725 18:52:16.212013   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.212021   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:16.212028   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:16.212086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:16.243930   60176 cri.go:89] found id: ""
	I0725 18:52:16.243958   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.243965   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:16.243970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:16.244018   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:16.276858   60176 cri.go:89] found id: ""
	I0725 18:52:16.276886   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.276895   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:16.276903   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:16.276964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:16.309039   60176 cri.go:89] found id: ""
	I0725 18:52:16.309068   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.309079   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:16.309089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:16.309103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:16.358664   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:16.358699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.371701   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:16.371733   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:52:15.786767   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.787058   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.099941   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.100836   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.148099   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.150035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:52:16.440851   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:16.440877   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:16.440892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:16.515546   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:16.515581   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.053916   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:19.067831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:19.067899   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:19.100740   60176 cri.go:89] found id: ""
	I0725 18:52:19.100765   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.100776   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:19.100783   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:19.100844   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:19.137247   60176 cri.go:89] found id: ""
	I0725 18:52:19.137272   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.137279   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:19.137284   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:19.137348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:19.181550   60176 cri.go:89] found id: ""
	I0725 18:52:19.181582   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.181594   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:19.181601   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:19.181666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:19.215392   60176 cri.go:89] found id: ""
	I0725 18:52:19.215418   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.215427   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:19.215433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:19.215495   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:19.247896   60176 cri.go:89] found id: ""
	I0725 18:52:19.247923   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.247933   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:19.247940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:19.248001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:19.285250   60176 cri.go:89] found id: ""
	I0725 18:52:19.285276   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.285286   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:19.285293   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:19.285362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:19.323470   60176 cri.go:89] found id: ""
	I0725 18:52:19.323500   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.323510   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:19.323518   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:19.323583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:19.358435   60176 cri.go:89] found id: ""
	I0725 18:52:19.358458   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.358466   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:19.358475   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:19.358491   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:19.422806   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:19.422825   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:19.422837   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:19.504316   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:19.504370   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.543929   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:19.543956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:19.596268   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:19.596300   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:20.286982   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.287235   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.601342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.099874   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.648118   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.147655   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.148904   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.110193   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:22.123411   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:22.123472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:22.158539   60176 cri.go:89] found id: ""
	I0725 18:52:22.158577   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.158588   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:22.158595   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:22.158654   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:22.196231   60176 cri.go:89] found id: ""
	I0725 18:52:22.196260   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.196270   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:22.196277   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:22.196354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:22.233119   60176 cri.go:89] found id: ""
	I0725 18:52:22.233150   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.233160   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:22.233167   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:22.233231   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:22.265273   60176 cri.go:89] found id: ""
	I0725 18:52:22.265302   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.265312   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:22.265322   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:22.265384   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:22.298933   60176 cri.go:89] found id: ""
	I0725 18:52:22.298959   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.298968   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:22.298982   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:22.299055   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:22.330841   60176 cri.go:89] found id: ""
	I0725 18:52:22.330877   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.330888   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:22.330896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:22.330965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:22.363717   60176 cri.go:89] found id: ""
	I0725 18:52:22.363743   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.363753   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:22.363760   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:22.363818   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:22.398672   60176 cri.go:89] found id: ""
	I0725 18:52:22.398701   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.398711   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:22.398722   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:22.398739   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:22.452774   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:22.452807   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:22.465478   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:22.465507   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:22.538473   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:22.538492   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:22.538504   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:22.622982   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:22.623029   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:25.163174   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:25.176183   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:25.176242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:25.212455   60176 cri.go:89] found id: ""
	I0725 18:52:25.212488   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.212497   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:25.212504   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:25.212558   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:25.249901   60176 cri.go:89] found id: ""
	I0725 18:52:25.249930   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.249938   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:25.249943   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:25.250002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:25.284400   60176 cri.go:89] found id: ""
	I0725 18:52:25.284425   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.284435   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:25.284443   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:25.284510   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:25.322175   60176 cri.go:89] found id: ""
	I0725 18:52:25.322199   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.322208   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:25.322214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:25.322274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:25.358579   60176 cri.go:89] found id: ""
	I0725 18:52:25.358606   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.358613   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:25.358618   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:25.358668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:25.393516   60176 cri.go:89] found id: ""
	I0725 18:52:25.393541   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.393552   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:25.393559   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:25.393619   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:25.426256   60176 cri.go:89] found id: ""
	I0725 18:52:25.426284   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.426293   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:25.426300   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:25.426386   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:25.460227   60176 cri.go:89] found id: ""
	I0725 18:52:25.460249   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.460257   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:25.460265   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:25.460276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:25.512461   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:25.512494   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:25.526304   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:25.526347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:25.597593   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:25.597618   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:25.597634   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:25.674233   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:25.674269   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:24.787536   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:27.286447   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.100033   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.599703   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.648517   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:30.650728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.209473   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:28.223161   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:28.223226   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:28.260471   60176 cri.go:89] found id: ""
	I0725 18:52:28.260500   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.260510   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:28.260517   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:28.260578   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:28.296055   60176 cri.go:89] found id: ""
	I0725 18:52:28.296093   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.296109   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:28.296117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:28.296179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:28.327790   60176 cri.go:89] found id: ""
	I0725 18:52:28.327819   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.327830   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:28.327836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:28.327896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:28.359967   60176 cri.go:89] found id: ""
	I0725 18:52:28.359994   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.360005   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:28.360012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:28.360076   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:28.394025   60176 cri.go:89] found id: ""
	I0725 18:52:28.394057   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.394065   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:28.394070   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:28.394119   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:28.425844   60176 cri.go:89] found id: ""
	I0725 18:52:28.425866   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.425874   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:28.425881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:28.425952   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:28.459239   60176 cri.go:89] found id: ""
	I0725 18:52:28.459266   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.459276   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:28.459283   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:28.459355   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:28.493964   60176 cri.go:89] found id: ""
	I0725 18:52:28.493992   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.494004   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:28.494015   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:28.494030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:28.543108   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:28.543138   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:28.556408   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:28.556440   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:28.622780   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:28.622802   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:28.622815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:28.705901   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:28.705935   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.247642   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:31.260467   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:31.260536   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:31.293280   60176 cri.go:89] found id: ""
	I0725 18:52:31.293303   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.293311   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:31.293316   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:31.293372   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:31.325186   60176 cri.go:89] found id: ""
	I0725 18:52:31.325220   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.325232   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:31.325238   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:31.325295   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:31.359715   60176 cri.go:89] found id: ""
	I0725 18:52:31.359744   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.359756   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:31.359763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:31.359821   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:29.287628   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.787471   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.099921   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.600091   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.147181   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:35.147612   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.396998   60176 cri.go:89] found id: ""
	I0725 18:52:31.397031   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.397043   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:31.397051   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:31.397107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:31.430896   60176 cri.go:89] found id: ""
	I0725 18:52:31.430921   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.430934   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:31.430941   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:31.430993   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:31.464746   60176 cri.go:89] found id: ""
	I0725 18:52:31.464775   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.464785   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:31.464791   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:31.464856   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:31.500645   60176 cri.go:89] found id: ""
	I0725 18:52:31.500668   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.500677   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:31.500682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:31.500730   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:31.534394   60176 cri.go:89] found id: ""
	I0725 18:52:31.534418   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.534427   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:31.534434   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:31.534446   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:31.615633   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:31.615667   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.657138   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:31.657164   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:31.707872   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:31.707907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:31.721076   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:31.721100   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:31.787451   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:34.288248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:34.301172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:34.301230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:34.333115   60176 cri.go:89] found id: ""
	I0725 18:52:34.333143   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.333153   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:34.333159   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:34.333206   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:34.368762   60176 cri.go:89] found id: ""
	I0725 18:52:34.368794   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.368805   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:34.368812   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:34.368875   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:34.404655   60176 cri.go:89] found id: ""
	I0725 18:52:34.404681   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.404691   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:34.404699   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:34.404759   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:34.438034   60176 cri.go:89] found id: ""
	I0725 18:52:34.438058   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.438068   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:34.438075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:34.438134   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:34.472642   60176 cri.go:89] found id: ""
	I0725 18:52:34.472667   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.472678   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:34.472684   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:34.472744   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:34.511813   60176 cri.go:89] found id: ""
	I0725 18:52:34.511846   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.511858   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:34.511876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:34.511947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:34.544142   60176 cri.go:89] found id: ""
	I0725 18:52:34.544172   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.544183   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:34.544190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:34.544253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:34.580404   60176 cri.go:89] found id: ""
	I0725 18:52:34.580428   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.580439   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:34.580451   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:34.580468   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:34.620866   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:34.620892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:34.675204   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:34.675237   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:34.688592   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:34.688616   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:34.760208   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:34.760234   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:34.760251   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:34.288570   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.786448   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.786936   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.099207   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.099682   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.100107   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.647899   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.147664   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.337593   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:37.353055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:37.353125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:37.386957   60176 cri.go:89] found id: ""
	I0725 18:52:37.386985   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.386996   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:37.387003   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:37.387062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:37.419464   60176 cri.go:89] found id: ""
	I0725 18:52:37.419489   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.419496   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:37.419501   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:37.419557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:37.452553   60176 cri.go:89] found id: ""
	I0725 18:52:37.452582   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.452592   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:37.452598   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:37.452660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:37.484946   60176 cri.go:89] found id: ""
	I0725 18:52:37.484971   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.484978   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:37.484983   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:37.485029   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:37.517509   60176 cri.go:89] found id: ""
	I0725 18:52:37.517535   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.517546   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:37.517554   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:37.517604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:37.549971   60176 cri.go:89] found id: ""
	I0725 18:52:37.549995   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.550003   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:37.550010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:37.550067   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:37.581630   60176 cri.go:89] found id: ""
	I0725 18:52:37.581661   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.581670   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:37.581676   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:37.581736   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:37.616677   60176 cri.go:89] found id: ""
	I0725 18:52:37.616705   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.616714   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:37.616727   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:37.616741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:37.630482   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:37.630517   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:37.699856   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:37.699883   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:37.699912   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:37.781132   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:37.781162   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:37.819877   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:37.819904   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
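	The block above is one full pass of minikube's log-gathering fallback: with no kube-apiserver process found by pgrep, the harness asks crictl for each expected control-plane container in turn, finds none, and falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal way to rerun the same checks by hand, assuming a shell on the affected node (for example via minikube ssh against the failing profile); the loop body and log commands below are taken verbatim from the harness invocations logged above:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== $c =="
	      sudo crictl ps -a --quiet --name="$c"    # empty output means the component was never created
	    done
	    sudo journalctl -u kubelet -n 400          # same kubelet window the harness collects
	    sudo journalctl -u crio -n 400             # CRI-O service log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	If every crictl query comes back empty, as it does throughout this run, the kubelet and CRI-O logs are the places to look for why the static pods were never started.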
	I0725 18:52:40.372910   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:40.385605   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:40.385672   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:40.420547   60176 cri.go:89] found id: ""
	I0725 18:52:40.420575   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.420586   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:40.420593   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:40.420642   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:40.455644   60176 cri.go:89] found id: ""
	I0725 18:52:40.455666   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.455674   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:40.455679   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:40.455735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:40.486576   60176 cri.go:89] found id: ""
	I0725 18:52:40.486599   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.486607   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:40.486613   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:40.486661   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:40.520015   60176 cri.go:89] found id: ""
	I0725 18:52:40.520038   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.520046   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:40.520053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:40.520115   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:40.550645   60176 cri.go:89] found id: ""
	I0725 18:52:40.550672   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.550680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:40.550685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:40.550739   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:40.584736   60176 cri.go:89] found id: ""
	I0725 18:52:40.584759   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.584766   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:40.584771   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:40.584827   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:40.620112   60176 cri.go:89] found id: ""
	I0725 18:52:40.620140   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.620151   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:40.620158   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:40.620221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:40.660888   60176 cri.go:89] found id: ""
	I0725 18:52:40.660910   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.660917   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:40.660926   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:40.660937   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.713935   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:40.713967   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:40.727194   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:40.727218   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:40.797362   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:40.797387   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:40.797408   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:40.878723   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:40.878756   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:41.286942   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.288780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.600347   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:45.099379   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.148037   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:44.648236   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.421579   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:43.434054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:43.434113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:43.468844   60176 cri.go:89] found id: ""
	I0725 18:52:43.468870   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.468880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:43.468887   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:43.468948   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:43.501075   60176 cri.go:89] found id: ""
	I0725 18:52:43.501102   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.501113   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:43.501120   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:43.501175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:43.533347   60176 cri.go:89] found id: ""
	I0725 18:52:43.533372   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.533381   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:43.533387   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:43.533439   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:43.569764   60176 cri.go:89] found id: ""
	I0725 18:52:43.569787   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.569795   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:43.569801   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:43.569851   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:43.604897   60176 cri.go:89] found id: ""
	I0725 18:52:43.604924   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.604935   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:43.604942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:43.604999   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:43.638584   60176 cri.go:89] found id: ""
	I0725 18:52:43.638621   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.638633   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:43.638640   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:43.638691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:43.672302   60176 cri.go:89] found id: ""
	I0725 18:52:43.672348   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.672359   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:43.672366   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:43.672425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:43.708589   60176 cri.go:89] found id: ""
	I0725 18:52:43.708620   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.708630   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:43.708641   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:43.708660   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:43.761733   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:43.761766   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:43.775233   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:43.775258   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:43.840767   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:43.840788   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:43.840803   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:43.914698   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:43.914730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
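	Every describe-nodes attempt in this stretch fails the same way: the bundled v1.20.0 kubectl cannot reach the apiserver on localhost:8443, which is consistent with the empty kube-apiserver listings above. A minimal sketch for confirming that from the node; the ss probe is an assumption (it presumes iproute2 is present on the node image), while the kubectl line is the exact command the harness runs:

	    sudo ss -ltnp | grep ':8443' || echo "nothing listening on :8443"   # assumes ss is installed
	    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	A refused connection together with an empty crictl listing points at the apiserver container never coming up, rather than at a kubeconfig or port mismatch.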
	I0725 18:52:45.786511   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.787882   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.100130   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.600576   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.147728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.648227   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:46.451913   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:46.465836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:46.465896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:46.499330   60176 cri.go:89] found id: ""
	I0725 18:52:46.499359   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.499369   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:46.499381   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:46.499446   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:46.537724   60176 cri.go:89] found id: ""
	I0725 18:52:46.537748   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.537758   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:46.537764   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:46.537825   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:46.568410   60176 cri.go:89] found id: ""
	I0725 18:52:46.568437   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.568446   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:46.568453   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:46.568519   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:46.599497   60176 cri.go:89] found id: ""
	I0725 18:52:46.599525   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.599535   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:46.599542   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:46.599607   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:46.631388   60176 cri.go:89] found id: ""
	I0725 18:52:46.631418   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.631427   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:46.631433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:46.631489   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:46.670666   60176 cri.go:89] found id: ""
	I0725 18:52:46.670688   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.670695   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:46.670701   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:46.670756   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:46.702825   60176 cri.go:89] found id: ""
	I0725 18:52:46.702862   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.702874   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:46.702883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:46.702947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:46.738431   60176 cri.go:89] found id: ""
	I0725 18:52:46.738459   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.738469   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:46.738479   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:46.738493   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:46.796704   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:46.796748   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:46.812042   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:46.812072   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:46.884905   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:46.884927   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:46.884942   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:46.965733   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:46.965773   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.505190   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:49.519648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:49.519733   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:49.559027   60176 cri.go:89] found id: ""
	I0725 18:52:49.559057   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.559064   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:49.559072   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:49.559124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:49.591468   60176 cri.go:89] found id: ""
	I0725 18:52:49.591489   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.591497   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:49.591503   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:49.591557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:49.629091   60176 cri.go:89] found id: ""
	I0725 18:52:49.629120   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.629129   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:49.629135   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:49.629199   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:49.664584   60176 cri.go:89] found id: ""
	I0725 18:52:49.664621   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.664633   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:49.664641   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:49.664693   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:49.695208   60176 cri.go:89] found id: ""
	I0725 18:52:49.695237   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.695247   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:49.695258   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:49.695323   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:49.726260   60176 cri.go:89] found id: ""
	I0725 18:52:49.726288   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.726299   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:49.726306   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:49.726468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:49.759938   60176 cri.go:89] found id: ""
	I0725 18:52:49.759969   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.759981   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:49.759990   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:49.760043   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:49.794113   60176 cri.go:89] found id: ""
	I0725 18:52:49.794142   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.794153   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:49.794164   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:49.794178   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.834409   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:49.834443   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:49.890684   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:49.890730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:49.904504   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:49.904534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:49.971482   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:49.971508   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:49.971523   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:50.286712   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.786827   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.099988   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.600144   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.147545   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.147590   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:56.148752   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
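	Interleaved with that loop, the other parallel test processes (pids 59378, 59645 and 60732 above) keep polling their metrics-server pods and see Ready as False on every pass; that is what pod_ready.go:102 is reporting. A minimal manual equivalent, assuming kubectl's current context points at the cluster under test and that the addon's pods carry the usual k8s-app=metrics-server label (both assumptions, not taken from this log):

	    # Ready condition per metrics-server pod, as pod_ready evaluates it
	    kubectl -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	    # Recent events usually show the failing probe or image pull
	    kubectl -n kube-system describe pod -l k8s-app=metrics-server | tail -n 30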
	I0725 18:52:52.552586   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:52.564658   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:52.564732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:52.604434   60176 cri.go:89] found id: ""
	I0725 18:52:52.604460   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.604470   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:52.604478   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:52.604532   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:52.638870   60176 cri.go:89] found id: ""
	I0725 18:52:52.638893   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.638907   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:52.638914   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:52.638973   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:52.670494   60176 cri.go:89] found id: ""
	I0725 18:52:52.670521   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.670531   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:52.670538   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:52.670604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:52.702250   60176 cri.go:89] found id: ""
	I0725 18:52:52.702282   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.702291   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:52.702298   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:52.702360   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:52.734144   60176 cri.go:89] found id: ""
	I0725 18:52:52.734172   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.734181   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:52.734187   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:52.734241   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:52.767581   60176 cri.go:89] found id: ""
	I0725 18:52:52.767606   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.767617   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:52.767624   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:52.767687   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:52.798874   60176 cri.go:89] found id: ""
	I0725 18:52:52.798895   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.798903   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:52.798908   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:52.798965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:52.829237   60176 cri.go:89] found id: ""
	I0725 18:52:52.829266   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.829276   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:52.829287   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:52.829309   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:52.879820   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:52.879856   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:52.893453   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:52.893477   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:52.962899   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:52.962925   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:52.962944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:53.042202   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:53.042234   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.581146   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:55.594458   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:55.594529   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:55.628122   60176 cri.go:89] found id: ""
	I0725 18:52:55.628152   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.628163   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:55.628170   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:55.628240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:55.661098   60176 cri.go:89] found id: ""
	I0725 18:52:55.661126   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.661137   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:55.661143   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:55.661195   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:55.694635   60176 cri.go:89] found id: ""
	I0725 18:52:55.694664   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.694675   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:55.694682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:55.694746   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:55.728875   60176 cri.go:89] found id: ""
	I0725 18:52:55.728902   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.728912   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:55.728924   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:55.728986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:55.764386   60176 cri.go:89] found id: ""
	I0725 18:52:55.764414   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.764423   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:55.764430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:55.764487   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:55.798285   60176 cri.go:89] found id: ""
	I0725 18:52:55.798335   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.798348   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:55.798355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:55.798407   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:55.833049   60176 cri.go:89] found id: ""
	I0725 18:52:55.833076   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.833083   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:55.833088   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:55.833144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:55.872278   60176 cri.go:89] found id: ""
	I0725 18:52:55.872310   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.872335   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:55.872347   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:55.872362   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.908301   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:55.908344   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:55.960197   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:55.960230   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:55.973912   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:55.973941   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:56.042103   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:56.042128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:56.042141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:54.787516   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.286820   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.099342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:59.099712   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.647566   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:00.647721   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.618832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:58.631315   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:58.631374   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:58.666492   60176 cri.go:89] found id: ""
	I0725 18:52:58.666521   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.666532   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:58.666540   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:58.666608   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:58.700391   60176 cri.go:89] found id: ""
	I0725 18:52:58.700421   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.700431   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:58.700450   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:58.700518   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:58.734582   60176 cri.go:89] found id: ""
	I0725 18:52:58.734608   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.734617   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:58.734621   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:58.734692   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:58.767777   60176 cri.go:89] found id: ""
	I0725 18:52:58.767806   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.767817   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:58.767823   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:58.767886   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:58.801021   60176 cri.go:89] found id: ""
	I0725 18:52:58.801046   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.801053   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:58.801058   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:58.801102   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:58.833191   60176 cri.go:89] found id: ""
	I0725 18:52:58.833223   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.833231   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:58.833236   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:58.833284   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:58.864805   60176 cri.go:89] found id: ""
	I0725 18:52:58.864839   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.864849   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:58.864854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:58.864916   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:58.896342   60176 cri.go:89] found id: ""
	I0725 18:52:58.896373   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.896384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:58.896396   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:58.896415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:58.950614   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:58.950652   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:58.974026   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:58.974063   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:59.056282   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:59.056305   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:59.056349   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:59.138254   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:59.138292   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:59.785805   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.787477   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.099859   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.604940   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.147177   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:05.147885   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.680405   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:01.693093   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:01.693161   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:01.725456   60176 cri.go:89] found id: ""
	I0725 18:53:01.725483   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.725494   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:01.725501   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:01.725562   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:01.757644   60176 cri.go:89] found id: ""
	I0725 18:53:01.757677   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.757688   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:01.757694   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:01.757765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:01.793640   60176 cri.go:89] found id: ""
	I0725 18:53:01.793660   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.793667   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:01.793672   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:01.793718   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:01.829336   60176 cri.go:89] found id: ""
	I0725 18:53:01.829368   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.829379   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:01.829386   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:01.829442   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:01.864597   60176 cri.go:89] found id: ""
	I0725 18:53:01.864625   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.864636   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:01.864643   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:01.864704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:01.895962   60176 cri.go:89] found id: ""
	I0725 18:53:01.895990   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.896001   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:01.896012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:01.896070   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:01.926426   60176 cri.go:89] found id: ""
	I0725 18:53:01.926451   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.926459   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:01.926463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:01.926517   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:01.957722   60176 cri.go:89] found id: ""
	I0725 18:53:01.957746   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.957755   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:01.957764   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:01.957779   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:02.012061   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:02.012096   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:02.025396   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:02.025423   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:02.088683   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:02.088706   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:02.088718   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:02.170941   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:02.170974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.713619   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:04.734911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:04.734970   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:04.793399   60176 cri.go:89] found id: ""
	I0725 18:53:04.793427   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.793438   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:04.793445   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:04.793493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:04.823679   60176 cri.go:89] found id: ""
	I0725 18:53:04.823711   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.823723   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:04.823729   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:04.823793   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:04.854922   60176 cri.go:89] found id: ""
	I0725 18:53:04.854957   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.854964   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:04.854970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:04.855023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:04.886913   60176 cri.go:89] found id: ""
	I0725 18:53:04.886937   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.886945   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:04.886953   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:04.887008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:04.919868   60176 cri.go:89] found id: ""
	I0725 18:53:04.919896   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.919907   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:04.919914   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:04.919979   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:04.953542   60176 cri.go:89] found id: ""
	I0725 18:53:04.953571   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.953581   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:04.953588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:04.953649   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:04.986901   60176 cri.go:89] found id: ""
	I0725 18:53:04.986925   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.986932   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:04.986937   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:04.986986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:05.020084   60176 cri.go:89] found id: ""
	I0725 18:53:05.020124   60176 logs.go:276] 0 containers: []
	W0725 18:53:05.020133   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:05.020141   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:05.020153   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:05.075512   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:05.075544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:05.089227   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:05.089256   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:05.155689   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:05.155707   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:05.155719   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:05.230252   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:05.230286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.286327   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.286366   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.287693   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.099267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.100754   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:10.599173   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.148931   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:09.647549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.770919   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:07.784196   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:07.784354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:07.817549   60176 cri.go:89] found id: ""
	I0725 18:53:07.817581   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.817593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:07.817601   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:07.817674   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:07.852853   60176 cri.go:89] found id: ""
	I0725 18:53:07.852876   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.852883   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:07.852889   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:07.852941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:07.890344   60176 cri.go:89] found id: ""
	I0725 18:53:07.890370   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.890377   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:07.890383   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:07.890443   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:07.921718   60176 cri.go:89] found id: ""
	I0725 18:53:07.921749   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.921760   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:07.921768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:07.921824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:07.955721   60176 cri.go:89] found id: ""
	I0725 18:53:07.955753   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.955763   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:07.955769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:07.955820   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:07.987760   60176 cri.go:89] found id: ""
	I0725 18:53:07.987789   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.987799   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:07.987806   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:07.987878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:08.020881   60176 cri.go:89] found id: ""
	I0725 18:53:08.020912   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.020922   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:08.020929   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:08.020994   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:08.053983   60176 cri.go:89] found id: ""
	I0725 18:53:08.054013   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.054025   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:08.054037   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:08.054053   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:08.134954   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:08.134996   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:08.177056   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:08.177085   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:08.229080   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:08.229121   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:08.242211   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:08.242242   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:08.305979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:10.806662   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:10.819111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:10.819172   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:10.854609   60176 cri.go:89] found id: ""
	I0725 18:53:10.854639   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.854652   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:10.854660   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:10.854743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:10.893436   60176 cri.go:89] found id: ""
	I0725 18:53:10.893466   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.893478   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:10.893486   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:10.893555   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:10.927410   60176 cri.go:89] found id: ""
	I0725 18:53:10.927435   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.927444   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:10.927449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:10.927520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:10.958061   60176 cri.go:89] found id: ""
	I0725 18:53:10.958082   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.958090   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:10.958095   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:10.958149   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:10.988781   60176 cri.go:89] found id: ""
	I0725 18:53:10.988812   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.988824   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:10.988831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:10.988892   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:11.021096   60176 cri.go:89] found id: ""
	I0725 18:53:11.021126   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.021137   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:11.021145   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:11.021204   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:11.053320   60176 cri.go:89] found id: ""
	I0725 18:53:11.053355   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.053368   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:11.053377   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:11.053445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:11.085018   60176 cri.go:89] found id: ""
	I0725 18:53:11.085046   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.085055   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:11.085063   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:11.085074   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:11.136102   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:11.136139   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:11.150126   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:11.150154   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:11.219206   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:11.219226   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:11.219243   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:11.301501   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:11.301534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:10.787076   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.287049   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.100296   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:15.598090   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:11.648889   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:14.148494   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.148801   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.840771   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:13.853763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:13.853848   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:13.889060   60176 cri.go:89] found id: ""
	I0725 18:53:13.889089   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.889098   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:13.889105   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:13.889163   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:13.920861   60176 cri.go:89] found id: ""
	I0725 18:53:13.920889   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.920900   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:13.920910   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:13.920974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:13.952009   60176 cri.go:89] found id: ""
	I0725 18:53:13.952037   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.952048   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:13.952054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:13.952117   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:13.985991   60176 cri.go:89] found id: ""
	I0725 18:53:13.986020   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.986030   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:13.986036   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:13.986098   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:14.024968   60176 cri.go:89] found id: ""
	I0725 18:53:14.024995   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.025003   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:14.025008   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:14.025079   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:14.058861   60176 cri.go:89] found id: ""
	I0725 18:53:14.058886   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.058897   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:14.058912   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:14.058976   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:14.092587   60176 cri.go:89] found id: ""
	I0725 18:53:14.092613   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.092628   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:14.092634   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:14.092697   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:14.127085   60176 cri.go:89] found id: ""
	I0725 18:53:14.127115   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.127124   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:14.127134   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:14.127148   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:14.179505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:14.179537   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:14.192813   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:14.192840   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:14.256564   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:14.256590   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:14.256604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:14.338570   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:14.338604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:15.287102   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.787128   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.599288   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:19.600086   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:18.648466   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:21.147558   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.877636   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:16.891131   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:16.891208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:16.924210   60176 cri.go:89] found id: ""
	I0725 18:53:16.924243   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.924253   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:16.924261   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:16.924343   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:16.957212   60176 cri.go:89] found id: ""
	I0725 18:53:16.957240   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.957247   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:16.957254   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:16.957341   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:16.989205   60176 cri.go:89] found id: ""
	I0725 18:53:16.989236   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.989244   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:16.989249   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:16.989298   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:17.025775   60176 cri.go:89] found id: ""
	I0725 18:53:17.025801   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.025812   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:17.025819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:17.025887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:17.059185   60176 cri.go:89] found id: ""
	I0725 18:53:17.059213   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.059223   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:17.059229   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:17.059275   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:17.090838   60176 cri.go:89] found id: ""
	I0725 18:53:17.090863   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.090871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:17.090876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:17.090932   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:17.126012   60176 cri.go:89] found id: ""
	I0725 18:53:17.126036   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.126044   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:17.126048   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:17.126106   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:17.165369   60176 cri.go:89] found id: ""
	I0725 18:53:17.165394   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.165405   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:17.165415   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:17.165436   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:17.178730   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:17.178771   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:17.251639   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:17.251666   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:17.251681   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:17.334840   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:17.334887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:17.380868   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:17.380895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.931610   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:19.943864   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:19.943964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:19.975865   60176 cri.go:89] found id: ""
	I0725 18:53:19.975893   60176 logs.go:276] 0 containers: []
	W0725 18:53:19.975904   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:19.975910   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:19.975975   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:20.010230   60176 cri.go:89] found id: ""
	I0725 18:53:20.010258   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.010268   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:20.010274   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:20.010321   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:20.042591   60176 cri.go:89] found id: ""
	I0725 18:53:20.042618   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.042626   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:20.042632   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:20.042680   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:20.073184   60176 cri.go:89] found id: ""
	I0725 18:53:20.073212   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.073224   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:20.073231   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:20.073286   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:20.106770   60176 cri.go:89] found id: ""
	I0725 18:53:20.106798   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.106810   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:20.106818   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:20.106888   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:20.141368   60176 cri.go:89] found id: ""
	I0725 18:53:20.141419   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.141429   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:20.141437   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:20.141496   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:20.174814   60176 cri.go:89] found id: ""
	I0725 18:53:20.174841   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.174852   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:20.174859   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:20.174918   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:20.208463   60176 cri.go:89] found id: ""
	I0725 18:53:20.208489   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.208497   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:20.208505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:20.208519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:20.220843   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:20.220867   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:20.287846   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:20.287871   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:20.287887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:20.362354   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:20.362391   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:20.399616   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:20.399650   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.790264   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.288082   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.100856   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:24.600029   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:23.148297   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:25.647615   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.950804   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:22.963553   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:22.963625   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:22.996193   60176 cri.go:89] found id: ""
	I0725 18:53:22.996215   60176 logs.go:276] 0 containers: []
	W0725 18:53:22.996222   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:22.996228   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:22.996273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:23.029417   60176 cri.go:89] found id: ""
	I0725 18:53:23.029446   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.029455   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:23.029460   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:23.029508   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:23.062381   60176 cri.go:89] found id: ""
	I0725 18:53:23.062406   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.062414   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:23.062419   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:23.062471   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:23.093948   60176 cri.go:89] found id: ""
	I0725 18:53:23.093975   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.093987   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:23.093995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:23.094066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:23.128049   60176 cri.go:89] found id: ""
	I0725 18:53:23.128076   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.128085   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:23.128091   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:23.128139   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:23.164593   60176 cri.go:89] found id: ""
	I0725 18:53:23.164617   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.164625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:23.164631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:23.164683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:23.197996   60176 cri.go:89] found id: ""
	I0725 18:53:23.198024   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.198032   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:23.198037   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:23.198087   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:23.233498   60176 cri.go:89] found id: ""
	I0725 18:53:23.233533   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.233545   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:23.233565   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:23.233580   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:23.287473   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:23.287506   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:23.300308   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:23.300358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:23.368879   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:23.368906   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:23.368919   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:23.445420   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:23.445453   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:25.985626   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:25.997898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:25.997971   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:26.030558   60176 cri.go:89] found id: ""
	I0725 18:53:26.030584   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.030593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:26.030599   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:26.030660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:26.067209   60176 cri.go:89] found id: ""
	I0725 18:53:26.067245   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.067256   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:26.067263   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:26.067348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:26.100872   60176 cri.go:89] found id: ""
	I0725 18:53:26.100891   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.100897   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:26.100902   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:26.100949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:26.135077   60176 cri.go:89] found id: ""
	I0725 18:53:26.135102   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.135110   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:26.135115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:26.135175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:26.171332   60176 cri.go:89] found id: ""
	I0725 18:53:26.171431   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.171445   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:26.171452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:26.171507   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:26.205883   60176 cri.go:89] found id: ""
	I0725 18:53:26.205912   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.205923   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:26.205930   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:26.205989   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:26.240407   60176 cri.go:89] found id: ""
	I0725 18:53:26.240436   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.240446   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:26.240452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:26.240513   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:26.273041   60176 cri.go:89] found id: ""
	I0725 18:53:26.273068   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.273078   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:26.273089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:26.273103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:26.327783   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:26.327815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:26.342925   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:26.342952   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:53:24.786526   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:26.786771   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:28.786890   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.100267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.600204   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.648059   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.648771   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:53:26.412563   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:26.412589   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:26.412605   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:26.493182   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:26.493222   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.030816   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:29.044047   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:29.044104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:29.077288   60176 cri.go:89] found id: ""
	I0725 18:53:29.077335   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.077354   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:29.077362   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:29.077429   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:29.113350   60176 cri.go:89] found id: ""
	I0725 18:53:29.113383   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.113395   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:29.113402   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:29.113472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:29.147123   60176 cri.go:89] found id: ""
	I0725 18:53:29.147151   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.147161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:29.147168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:29.147224   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:29.182248   60176 cri.go:89] found id: ""
	I0725 18:53:29.182279   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.182296   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:29.182304   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:29.182367   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:29.215750   60176 cri.go:89] found id: ""
	I0725 18:53:29.215777   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.215788   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:29.215795   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:29.215857   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:29.249001   60176 cri.go:89] found id: ""
	I0725 18:53:29.249027   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.249037   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:29.249044   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:29.249104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:29.281774   60176 cri.go:89] found id: ""
	I0725 18:53:29.281802   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.281812   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:29.281819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:29.281879   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:29.318703   60176 cri.go:89] found id: ""
	I0725 18:53:29.318728   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.318736   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:29.318744   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:29.318760   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:29.398145   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:29.398170   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:29.398184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:29.474090   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:29.474126   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.510143   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:29.510216   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:29.562952   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:29.562988   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:30.787145   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.788031   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.099672   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.148832   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.647209   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.076743   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:32.090035   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:32.090108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:32.123139   60176 cri.go:89] found id: ""
	I0725 18:53:32.123173   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.123184   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:32.123191   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:32.123255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:32.156337   60176 cri.go:89] found id: ""
	I0725 18:53:32.156363   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.156372   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:32.156378   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:32.156437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:32.191566   60176 cri.go:89] found id: ""
	I0725 18:53:32.191597   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.191609   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:32.191617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:32.191684   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:32.225480   60176 cri.go:89] found id: ""
	I0725 18:53:32.225519   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.225528   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:32.225535   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:32.225593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:32.257129   60176 cri.go:89] found id: ""
	I0725 18:53:32.257160   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.257169   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:32.257175   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:32.257221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:32.298142   60176 cri.go:89] found id: ""
	I0725 18:53:32.298171   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.298180   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:32.298190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:32.298240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:32.331052   60176 cri.go:89] found id: ""
	I0725 18:53:32.331081   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.331092   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:32.331098   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:32.331143   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:32.364841   60176 cri.go:89] found id: ""
	I0725 18:53:32.364871   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.364882   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:32.364892   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:32.364907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:32.417931   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:32.417970   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:32.432131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:32.432159   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:32.499759   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:32.499784   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:32.499806   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:32.579140   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:32.579191   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:35.120647   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:35.133992   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:35.134084   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:35.172030   60176 cri.go:89] found id: ""
	I0725 18:53:35.172052   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.172061   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:35.172067   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:35.172123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:35.207893   60176 cri.go:89] found id: ""
	I0725 18:53:35.207920   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.207930   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:35.207937   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:35.207991   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:35.241626   60176 cri.go:89] found id: ""
	I0725 18:53:35.241651   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.241661   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:35.241668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:35.241732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:35.274017   60176 cri.go:89] found id: ""
	I0725 18:53:35.274047   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.274058   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:35.274064   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:35.274129   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:35.308778   60176 cri.go:89] found id: ""
	I0725 18:53:35.308809   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.308820   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:35.308827   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:35.308890   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:35.341366   60176 cri.go:89] found id: ""
	I0725 18:53:35.341392   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.341400   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:35.341406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:35.341461   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:35.373955   60176 cri.go:89] found id: ""
	I0725 18:53:35.373983   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.373994   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:35.374001   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:35.374058   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:35.404705   60176 cri.go:89] found id: ""
	I0725 18:53:35.404733   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.404743   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:35.404755   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:35.404794   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:35.455009   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:35.455043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:35.469113   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:35.469141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:35.533466   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:35.533497   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:35.533514   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:35.608513   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:35.608546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:34.789202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:37.287021   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.100385   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:40.599540   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.647379   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.648503   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.147602   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.147415   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:38.159974   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:38.160032   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:38.191108   60176 cri.go:89] found id: ""
	I0725 18:53:38.191138   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.191150   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:38.191157   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:38.191207   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:38.223494   60176 cri.go:89] found id: ""
	I0725 18:53:38.223519   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.223527   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:38.223533   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:38.223583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:38.254433   60176 cri.go:89] found id: ""
	I0725 18:53:38.254462   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.254473   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:38.254480   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:38.254546   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:38.286229   60176 cri.go:89] found id: ""
	I0725 18:53:38.286258   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.286268   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:38.286276   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:38.286339   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:38.323332   60176 cri.go:89] found id: ""
	I0725 18:53:38.323362   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.323371   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:38.323378   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:38.323441   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:38.356260   60176 cri.go:89] found id: ""
	I0725 18:53:38.356290   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.356301   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:38.356309   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:38.356383   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:38.388543   60176 cri.go:89] found id: ""
	I0725 18:53:38.388571   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.388582   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:38.388588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:38.388660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:38.424003   60176 cri.go:89] found id: ""
	I0725 18:53:38.424030   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.424040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:38.424051   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:38.424065   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:38.474963   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:38.474995   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:38.488392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:38.488425   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:38.561922   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:38.561946   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:38.562116   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:38.646569   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:38.646604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:41.190319   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:41.202314   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:41.202382   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:41.238344   60176 cri.go:89] found id: ""
	I0725 18:53:41.238370   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.238378   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:41.238383   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:41.238438   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:41.272219   60176 cri.go:89] found id: ""
	I0725 18:53:41.272252   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.272263   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:41.272271   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:41.272349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:41.307125   60176 cri.go:89] found id: ""
	I0725 18:53:41.307151   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.307161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:41.307168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:41.307230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:41.339277   60176 cri.go:89] found id: ""
	I0725 18:53:41.339307   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.339320   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:41.339328   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:41.339394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:41.373989   60176 cri.go:89] found id: ""
	I0725 18:53:41.374103   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.374126   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:41.374136   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:41.374205   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:39.287244   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.287891   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.787538   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:42.600625   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.099276   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.647388   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.648749   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.404939   60176 cri.go:89] found id: ""
	I0725 18:53:41.404968   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.404979   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:41.404986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:41.405050   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:41.436889   60176 cri.go:89] found id: ""
	I0725 18:53:41.436919   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.436931   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:41.436940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:41.437009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:41.468457   60176 cri.go:89] found id: ""
	I0725 18:53:41.468486   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.468496   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:41.468506   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:41.468520   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:41.519499   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:41.519529   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:41.533653   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:41.533688   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:41.602134   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:41.602156   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:41.602171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:41.676181   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:41.676214   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.213932   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:44.226286   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:44.226352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:44.258782   60176 cri.go:89] found id: ""
	I0725 18:53:44.258817   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.258829   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:44.258835   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:44.258887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:44.308398   60176 cri.go:89] found id: ""
	I0725 18:53:44.308424   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.308432   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:44.308437   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:44.308499   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:44.339388   60176 cri.go:89] found id: ""
	I0725 18:53:44.339414   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.339424   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:44.339430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:44.339493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:44.369635   60176 cri.go:89] found id: ""
	I0725 18:53:44.369669   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.369679   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:44.369685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:44.369751   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:44.403834   60176 cri.go:89] found id: ""
	I0725 18:53:44.403859   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.403869   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:44.403876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:44.403939   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:44.439172   60176 cri.go:89] found id: ""
	I0725 18:53:44.439204   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.439215   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:44.439222   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:44.439287   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:44.474638   60176 cri.go:89] found id: ""
	I0725 18:53:44.474664   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.474674   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:44.474681   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:44.474743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:44.506205   60176 cri.go:89] found id: ""
	I0725 18:53:44.506226   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.506233   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:44.506241   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:44.506253   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:44.587955   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:44.587994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.626251   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:44.626276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:44.679008   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:44.679040   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:44.691749   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:44.691776   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:44.763419   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:46.286260   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.287172   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.099923   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:49.600555   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.148223   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:50.648549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.263738   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:47.275907   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:47.275974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:47.313612   60176 cri.go:89] found id: ""
	I0725 18:53:47.313642   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.313651   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:47.313662   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:47.313727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:47.345186   60176 cri.go:89] found id: ""
	I0725 18:53:47.345215   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.345226   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:47.345233   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:47.345304   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:47.378074   60176 cri.go:89] found id: ""
	I0725 18:53:47.378103   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.378114   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:47.378128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:47.378188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:47.407147   60176 cri.go:89] found id: ""
	I0725 18:53:47.407176   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.407186   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:47.407193   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:47.407255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:47.437015   60176 cri.go:89] found id: ""
	I0725 18:53:47.437049   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.437061   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:47.437068   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:47.437153   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:47.469201   60176 cri.go:89] found id: ""
	I0725 18:53:47.469231   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.469241   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:47.469248   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:47.469331   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:47.501160   60176 cri.go:89] found id: ""
	I0725 18:53:47.501189   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.501199   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:47.501206   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:47.501264   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:47.535102   60176 cri.go:89] found id: ""
	I0725 18:53:47.535140   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.535149   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:47.535159   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:47.535184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:47.547568   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:47.547593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:47.616025   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:47.616048   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:47.616062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:47.690450   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:47.690482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:47.725553   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:47.725589   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.281640   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:50.295201   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:50.295272   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:50.331689   60176 cri.go:89] found id: ""
	I0725 18:53:50.331713   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.331721   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:50.331726   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:50.331770   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:50.362392   60176 cri.go:89] found id: ""
	I0725 18:53:50.362422   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.362434   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:50.362441   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:50.362505   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:50.393410   60176 cri.go:89] found id: ""
	I0725 18:53:50.393433   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.393441   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:50.393449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:50.393493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:50.425041   60176 cri.go:89] found id: ""
	I0725 18:53:50.425068   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.425079   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:50.425085   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:50.425144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:50.461533   60176 cri.go:89] found id: ""
	I0725 18:53:50.461556   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.461563   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:50.461568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:50.461614   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:50.494395   60176 cri.go:89] found id: ""
	I0725 18:53:50.494417   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.494425   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:50.494431   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:50.494485   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:50.528639   60176 cri.go:89] found id: ""
	I0725 18:53:50.528663   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.528672   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:50.528678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:50.528724   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:50.562007   60176 cri.go:89] found id: ""
	I0725 18:53:50.562032   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.562040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:50.562049   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:50.562062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.612107   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:50.612141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:50.624516   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:50.624540   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:50.724772   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:50.724799   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:50.724818   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:50.813891   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:50.813924   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:50.288626   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.786395   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.100268   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:54.598939   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.147764   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:55.147940   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.352629   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:53.366863   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:53.366941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:53.401238   60176 cri.go:89] found id: ""
	I0725 18:53:53.401266   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.401277   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:53.401284   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:53.401351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:53.434133   60176 cri.go:89] found id: ""
	I0725 18:53:53.434166   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.434178   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:53.434186   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:53.434248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:53.470135   60176 cri.go:89] found id: ""
	I0725 18:53:53.470157   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.470165   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:53.470170   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:53.470217   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:53.512591   60176 cri.go:89] found id: ""
	I0725 18:53:53.512613   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.512621   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:53.512626   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:53.512683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:53.544476   60176 cri.go:89] found id: ""
	I0725 18:53:53.544506   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.544517   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:53.544524   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:53.544591   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:53.577697   60176 cri.go:89] found id: ""
	I0725 18:53:53.577727   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.577746   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:53.577753   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:53.577816   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:53.610729   60176 cri.go:89] found id: ""
	I0725 18:53:53.610754   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.610761   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:53.610769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:53.610817   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:53.645127   60176 cri.go:89] found id: ""
	I0725 18:53:53.645154   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.645164   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:53.645174   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:53.645188   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:53.694575   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:53.694608   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:53.707931   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:53.707958   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:53.778423   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:53.778446   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:53.778460   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:53.860424   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:53.860458   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:55.286806   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.288524   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:56.600953   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:59.099301   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.647861   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:00.148873   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:56.400993   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:56.418757   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:56.418834   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:56.466300   60176 cri.go:89] found id: ""
	I0725 18:53:56.466330   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.466340   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:56.466348   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:56.466409   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:56.523080   60176 cri.go:89] found id: ""
	I0725 18:53:56.523107   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.523117   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:56.523124   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:56.523184   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:56.554854   60176 cri.go:89] found id: ""
	I0725 18:53:56.554881   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.554891   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:56.554898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:56.554953   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:56.588851   60176 cri.go:89] found id: ""
	I0725 18:53:56.588876   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.588885   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:56.588892   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:56.588958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:56.623818   60176 cri.go:89] found id: ""
	I0725 18:53:56.623840   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.623849   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:56.623854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:56.623902   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:56.658958   60176 cri.go:89] found id: ""
	I0725 18:53:56.658982   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.658990   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:56.658996   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:56.659044   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:56.694689   60176 cri.go:89] found id: ""
	I0725 18:53:56.694715   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.694724   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:56.694729   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:56.694780   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:56.728038   60176 cri.go:89] found id: ""
	I0725 18:53:56.728067   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.728077   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:56.728088   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:56.728103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:56.805628   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:56.805657   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:56.805672   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:56.886168   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:56.886210   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:56.923004   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:56.923043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:56.975693   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:56.975729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.491244   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:59.503301   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:59.503363   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:59.540674   60176 cri.go:89] found id: ""
	I0725 18:53:59.540699   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.540707   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:59.540712   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:59.540763   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:59.575145   60176 cri.go:89] found id: ""
	I0725 18:53:59.575182   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.575192   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:59.575199   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:59.575260   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:59.606952   60176 cri.go:89] found id: ""
	I0725 18:53:59.606978   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.606989   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:59.606995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:59.607056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:59.645110   60176 cri.go:89] found id: ""
	I0725 18:53:59.645136   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.645147   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:59.645155   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:59.645218   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:59.676479   60176 cri.go:89] found id: ""
	I0725 18:53:59.676499   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.676507   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:59.676512   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:59.676581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:59.707454   60176 cri.go:89] found id: ""
	I0725 18:53:59.707482   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.707493   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:59.707500   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:59.707575   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:59.740387   60176 cri.go:89] found id: ""
	I0725 18:53:59.740414   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.740421   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:59.740427   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:59.740474   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:59.774171   60176 cri.go:89] found id: ""
	I0725 18:53:59.774199   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.774207   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:59.774216   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:59.774231   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:59.825138   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:59.825171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.839715   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:59.839742   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:59.905645   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:59.905681   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:59.905699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:59.980909   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:59.980943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:59.787202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.286987   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:01.099490   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:03.100056   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.602329   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.647803   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:04.648473   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.524178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:02.538055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:02.538113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:02.576234   60176 cri.go:89] found id: ""
	I0725 18:54:02.576259   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.576268   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:02.576274   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:02.576340   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:02.607765   60176 cri.go:89] found id: ""
	I0725 18:54:02.607792   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.607803   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:02.607810   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:02.607865   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:02.640566   60176 cri.go:89] found id: ""
	I0725 18:54:02.640592   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.640601   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:02.640606   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:02.640655   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:02.673476   60176 cri.go:89] found id: ""
	I0725 18:54:02.673504   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.673512   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:02.673517   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:02.673565   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:02.706270   60176 cri.go:89] found id: ""
	I0725 18:54:02.706299   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.706309   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:02.706316   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:02.706376   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:02.737108   60176 cri.go:89] found id: ""
	I0725 18:54:02.737138   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.737146   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:02.737152   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:02.737200   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:02.775681   60176 cri.go:89] found id: ""
	I0725 18:54:02.775710   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.775719   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:02.775724   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:02.775773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:02.808116   60176 cri.go:89] found id: ""
	I0725 18:54:02.808151   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.808159   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:02.808169   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:02.808182   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:02.872505   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:02.872534   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:02.872557   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:02.948158   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:02.948193   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:02.982990   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:02.983020   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:03.031910   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:03.031943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:05.545994   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:05.559105   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.559174   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.594106   60176 cri.go:89] found id: ""
	I0725 18:54:05.594134   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.594144   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:05.594151   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.594232   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.630148   60176 cri.go:89] found id: ""
	I0725 18:54:05.630172   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.630179   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:05.630185   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.630242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.662968   60176 cri.go:89] found id: ""
	I0725 18:54:05.662993   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.663003   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:05.663010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.663059   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.696645   60176 cri.go:89] found id: ""
	I0725 18:54:05.696668   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.696676   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:05.696682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.696738   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:05.730027   60176 cri.go:89] found id: ""
	I0725 18:54:05.730050   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.730058   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:05.730063   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:05.730113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:05.760918   60176 cri.go:89] found id: ""
	I0725 18:54:05.760946   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.760956   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:05.760968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:05.761027   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:05.801025   60176 cri.go:89] found id: ""
	I0725 18:54:05.801057   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.801068   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:05.801075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:05.801142   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:05.834567   60176 cri.go:89] found id: ""
	I0725 18:54:05.834594   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.834605   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:05.834615   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:05.834630   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:05.903812   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:05.903840   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:05.903855   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:05.981642   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:05.981671   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.024246   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.024316   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.081777   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:06.081802   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:04.786654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.786668   59645 pod_ready.go:81] duration metric: took 4m0.006258788s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:05.786698   59645 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:05.786708   59645 pod_ready.go:38] duration metric: took 4m6.551775292s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:54:05.786726   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:05.786754   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.786811   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.838362   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:05.838384   59645 cri.go:89] found id: ""
	I0725 18:54:05.838391   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:05.838441   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.843131   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.843190   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.882099   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:05.882125   59645 cri.go:89] found id: ""
	I0725 18:54:05.882134   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:05.882191   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.886383   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.886450   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.931971   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:05.932001   59645 cri.go:89] found id: ""
	I0725 18:54:05.932011   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:05.932069   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.936830   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.936891   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.976146   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:05.976171   59645 cri.go:89] found id: ""
	I0725 18:54:05.976179   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:05.976244   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.980878   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.980959   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:06.028640   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.028663   59645 cri.go:89] found id: ""
	I0725 18:54:06.028672   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:06.028720   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.033353   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:06.033411   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:06.072245   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.072269   59645 cri.go:89] found id: ""
	I0725 18:54:06.072279   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:06.072352   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.076614   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:06.076672   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:06.116418   59645 cri.go:89] found id: ""
	I0725 18:54:06.116443   59645 logs.go:276] 0 containers: []
	W0725 18:54:06.116453   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:06.116460   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:06.116520   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:06.154703   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:06.154725   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:06.154730   59645 cri.go:89] found id: ""
	I0725 18:54:06.154737   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:06.154795   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.158699   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.162190   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:06.162213   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.199003   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:06.199033   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.248171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:06.248208   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:06.774102   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:06.774139   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.815959   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.815984   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.872973   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:06.873013   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:06.915825   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:06.915858   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:06.958394   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:06.958423   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:06.993405   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:06.993437   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:07.026716   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:07.026745   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:07.040444   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:07.040474   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:07.156511   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:07.156541   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:07.191065   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:07.191091   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:08.099408   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:10.100363   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:07.148587   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:09.648368   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:08.598790   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:08.611234   60176 kubeadm.go:597] duration metric: took 4m4.357436643s to restartPrimaryControlPlane
	W0725 18:54:08.611305   60176 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0725 18:54:08.611343   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:54:13.076782   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.465409333s)
	I0725 18:54:13.076872   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:13.091089   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:54:13.102042   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:54:13.111117   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:54:13.111134   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:54:13.111171   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:54:13.119629   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:54:13.119676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:54:13.128676   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:54:13.136705   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:54:13.136761   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:54:13.145959   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.154628   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:54:13.154676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.163164   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:54:13.171473   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:54:13.171552   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:54:13.179663   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:54:13.244923   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:54:13.245063   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:54:13.387687   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:54:13.387814   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:54:13.387941   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:54:13.566258   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:54:09.724251   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:09.740055   59645 api_server.go:72] duration metric: took 4m18.224261341s to wait for apiserver process to appear ...
	I0725 18:54:09.740086   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:09.740125   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:09.740189   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:09.780027   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:09.780052   59645 cri.go:89] found id: ""
	I0725 18:54:09.780061   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:09.780121   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.784110   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:09.784170   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:09.821158   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:09.821177   59645 cri.go:89] found id: ""
	I0725 18:54:09.821185   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:09.821245   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.825235   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:09.825294   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:09.863880   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:09.863903   59645 cri.go:89] found id: ""
	I0725 18:54:09.863910   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:09.863956   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.868206   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:09.868260   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:09.902168   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:09.902191   59645 cri.go:89] found id: ""
	I0725 18:54:09.902200   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:09.902260   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.906583   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:09.906637   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:09.948980   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:09.948997   59645 cri.go:89] found id: ""
	I0725 18:54:09.949004   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:09.949061   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.953072   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:09.953135   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:09.987862   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:09.987891   59645 cri.go:89] found id: ""
	I0725 18:54:09.987901   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:09.987970   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.991893   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:09.991956   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:10.029171   59645 cri.go:89] found id: ""
	I0725 18:54:10.029201   59645 logs.go:276] 0 containers: []
	W0725 18:54:10.029212   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:10.029229   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:10.029298   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:10.069098   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.069123   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.069129   59645 cri.go:89] found id: ""
	I0725 18:54:10.069138   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:10.069185   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.073777   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.077625   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:10.077650   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:10.089863   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:10.089889   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:10.139865   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:10.139906   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:10.178236   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:10.178263   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:10.216425   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:10.216455   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:10.249818   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:10.249845   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.286603   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:10.286629   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:10.325189   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:10.325215   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:10.378752   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:10.378793   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:10.485922   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:10.485964   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:10.535583   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:10.535627   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:10.586930   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:10.586963   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.626295   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:10.626323   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.552874   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:54:13.558265   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:54:13.559439   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:13.559459   59645 api_server.go:131] duration metric: took 3.819366874s to wait for apiserver health ...
	I0725 18:54:13.559467   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:13.559491   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:13.559539   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:13.597965   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:13.597988   59645 cri.go:89] found id: ""
	I0725 18:54:13.597996   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:13.598050   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.602225   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:13.602291   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:13.652885   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:13.652914   59645 cri.go:89] found id: ""
	I0725 18:54:13.652924   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:13.652982   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.656970   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:13.657031   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:13.690769   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:13.690792   59645 cri.go:89] found id: ""
	I0725 18:54:13.690802   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:13.690861   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.694630   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:13.694692   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:13.732306   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:13.732346   59645 cri.go:89] found id: ""
	I0725 18:54:13.732356   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:13.732413   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.736242   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:13.736311   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:13.771516   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:13.771543   59645 cri.go:89] found id: ""
	I0725 18:54:13.771552   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:13.771610   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.775592   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:13.775654   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:13.812821   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:13.812847   59645 cri.go:89] found id: ""
	I0725 18:54:13.812857   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:13.812911   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.817039   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:13.817097   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:13.856529   59645 cri.go:89] found id: ""
	I0725 18:54:13.856560   59645 logs.go:276] 0 containers: []
	W0725 18:54:13.856577   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:13.856584   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:13.856647   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:13.889734   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:13.889760   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:13.889766   59645 cri.go:89] found id: ""
	I0725 18:54:13.889774   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:13.889831   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.893730   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.897171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:13.897188   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.568262   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:54:13.568407   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:54:13.568493   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:54:13.568599   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:54:13.568677   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:54:13.568771   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:54:13.568844   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:54:13.569095   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:54:13.570081   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:54:13.570719   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:54:13.571213   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:54:13.571395   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:54:13.571482   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:54:13.900234   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:54:14.171283   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:54:14.317774   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:54:14.522412   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:54:14.537598   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:54:14.539553   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:54:14.539629   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:54:14.683755   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:54:12.600280   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.601203   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:11.648941   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.148132   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.685635   60176 out.go:204]   - Booting up control plane ...
	I0725 18:54:14.685764   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:54:14.697124   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:54:14.698087   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:54:14.698830   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:54:14.701051   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:54:14.314664   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:14.314702   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:14.359956   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:14.359991   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:14.429456   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:14.429491   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:14.551238   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:14.551279   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:14.598045   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:14.598082   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:14.633668   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:14.633700   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:14.668871   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:14.668897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:14.732575   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:14.732644   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:14.748852   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:14.748897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:14.794021   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:14.794058   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:14.836447   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:14.836481   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:14.870813   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:14.870852   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:17.414647   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:17.414678   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.414683   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.414687   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.414691   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.414694   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.414699   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.414704   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.414710   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.414718   59645 system_pods.go:74] duration metric: took 3.85524656s to wait for pod list to return data ...
	I0725 18:54:17.414726   59645 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:17.417047   59645 default_sa.go:45] found service account: "default"
	I0725 18:54:17.417067   59645 default_sa.go:55] duration metric: took 2.333088ms for default service account to be created ...
	I0725 18:54:17.417074   59645 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:17.422890   59645 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:17.422915   59645 system_pods.go:89] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.422920   59645 system_pods.go:89] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.422925   59645 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.422929   59645 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.422933   59645 system_pods.go:89] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.422936   59645 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.422942   59645 system_pods.go:89] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.422947   59645 system_pods.go:89] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.422953   59645 system_pods.go:126] duration metric: took 5.874194ms to wait for k8s-apps to be running ...
	I0725 18:54:17.422958   59645 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:17.422998   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:17.438463   59645 system_svc.go:56] duration metric: took 15.497014ms WaitForService to wait for kubelet
	I0725 18:54:17.438490   59645 kubeadm.go:582] duration metric: took 4m25.922705533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:17.438511   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:17.441632   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:17.441653   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:17.441671   59645 node_conditions.go:105] duration metric: took 3.155244ms to run NodePressure ...
	I0725 18:54:17.441682   59645 start.go:241] waiting for startup goroutines ...
	I0725 18:54:17.441688   59645 start.go:246] waiting for cluster config update ...
	I0725 18:54:17.441698   59645 start.go:255] writing updated cluster config ...
	I0725 18:54:17.441957   59645 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:17.491791   59645 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:17.493992   59645 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-600433" cluster and "default" namespace by default
	I0725 18:54:16.601481   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:19.100120   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:16.646970   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:18.647757   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:20.650382   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:21.599857   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:24.099007   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:23.147215   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:25.148069   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:26.599428   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:28.600159   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:30.601469   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:27.150076   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:29.647741   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:33.100850   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:35.600080   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:31.648293   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:34.147584   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:36.147883   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.099662   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.601691   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.148559   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.648470   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:43.099948   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:45.599146   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:41.647969   60732 pod_ready.go:81] duration metric: took 4m0.006188545s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:41.647993   60732 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:41.647999   60732 pod_ready.go:38] duration metric: took 4m4.549463734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:54:41.648014   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:41.648042   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:41.648093   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:41.701960   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:41.701990   60732 cri.go:89] found id: ""
	I0725 18:54:41.702000   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:41.702060   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.706683   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:41.706775   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:41.741997   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:41.742019   60732 cri.go:89] found id: ""
	I0725 18:54:41.742027   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:41.742070   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.745965   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:41.746019   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:41.787104   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:41.787127   60732 cri.go:89] found id: ""
	I0725 18:54:41.787137   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:41.787189   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.791375   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:41.791441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:41.836394   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:41.836417   60732 cri.go:89] found id: ""
	I0725 18:54:41.836425   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:41.836472   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.840775   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:41.840830   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:41.877307   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:41.877328   60732 cri.go:89] found id: ""
	I0725 18:54:41.877338   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:41.877384   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.881221   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:41.881289   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:41.918540   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:41.918569   60732 cri.go:89] found id: ""
	I0725 18:54:41.918579   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:41.918639   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.922866   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:41.922975   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:41.957335   60732 cri.go:89] found id: ""
	I0725 18:54:41.957361   60732 logs.go:276] 0 containers: []
	W0725 18:54:41.957371   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:41.957377   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:41.957441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:41.998241   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:41.998269   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:41.998274   60732 cri.go:89] found id: ""
	I0725 18:54:41.998283   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:41.998333   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.002872   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.006541   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:42.006571   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:42.039456   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:42.039484   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:42.535367   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:42.535412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:42.592118   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:42.592165   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:42.606753   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:42.606784   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:42.656287   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:42.656337   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:42.696439   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:42.696470   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:42.752874   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:42.752913   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:42.786513   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:42.786540   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:42.914470   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:42.914506   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:42.951371   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:42.951399   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:42.989249   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:42.989278   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:43.030911   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:43.030945   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:45.581560   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:45.599532   60732 api_server.go:72] duration metric: took 4m15.71630146s to wait for apiserver process to appear ...
	I0725 18:54:45.599559   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:45.599602   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:45.599669   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:45.643222   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:45.643245   60732 cri.go:89] found id: ""
	I0725 18:54:45.643251   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:45.643293   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.647594   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:45.647646   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:45.685817   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:45.685843   60732 cri.go:89] found id: ""
	I0725 18:54:45.685851   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:45.685908   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.689698   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:45.689746   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:45.723068   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:45.723086   60732 cri.go:89] found id: ""
	I0725 18:54:45.723093   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:45.723139   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.727312   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:45.727373   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:45.764668   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.764691   60732 cri.go:89] found id: ""
	I0725 18:54:45.764698   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:45.764746   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.768763   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:45.768821   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:45.804140   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.804162   60732 cri.go:89] found id: ""
	I0725 18:54:45.804171   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:45.804229   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.807907   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:45.807962   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:45.845435   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:45.845458   60732 cri.go:89] found id: ""
	I0725 18:54:45.845465   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:45.845516   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.849429   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:45.849488   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:45.882663   60732 cri.go:89] found id: ""
	I0725 18:54:45.882696   60732 logs.go:276] 0 containers: []
	W0725 18:54:45.882706   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:45.882713   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:45.882779   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:45.916947   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:45.916975   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:45.916988   60732 cri.go:89] found id: ""
	I0725 18:54:45.916995   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:45.917039   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.921470   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.925153   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:45.925175   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.959693   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:45.959722   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.998162   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:45.998188   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:47.599790   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:49.605818   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:46.424235   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:46.424271   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:46.465439   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:46.465468   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:46.516900   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:46.516931   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:46.629700   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:46.629777   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:46.673233   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:46.673264   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:46.706641   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:46.706680   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:46.741970   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:46.742002   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:46.755337   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:46.755364   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:46.805564   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:46.805594   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:46.856226   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:46.856257   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.398852   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:54:49.403222   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:54:49.404180   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:49.404199   60732 api_server.go:131] duration metric: took 3.804634202s to wait for apiserver health ...
	I0725 18:54:49.404206   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:49.404227   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:49.404269   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:49.439543   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:49.439561   60732 cri.go:89] found id: ""
	I0725 18:54:49.439568   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:49.439625   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.444958   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:49.445028   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:49.482934   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:49.482959   60732 cri.go:89] found id: ""
	I0725 18:54:49.482969   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:49.483026   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.486982   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:49.487057   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:49.526379   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.526405   60732 cri.go:89] found id: ""
	I0725 18:54:49.526415   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:49.526481   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.531314   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:49.531401   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:49.565687   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.565716   60732 cri.go:89] found id: ""
	I0725 18:54:49.565724   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:49.565772   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.569706   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:49.569778   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:49.606900   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.606923   60732 cri.go:89] found id: ""
	I0725 18:54:49.606932   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:49.606986   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.611079   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:49.611155   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:49.645077   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.645099   60732 cri.go:89] found id: ""
	I0725 18:54:49.645107   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:49.645165   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.648932   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:49.648984   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:49.685181   60732 cri.go:89] found id: ""
	I0725 18:54:49.685209   60732 logs.go:276] 0 containers: []
	W0725 18:54:49.685220   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:49.685228   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:49.685290   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:49.718825   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.718852   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:49.718858   60732 cri.go:89] found id: ""
	I0725 18:54:49.718866   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:49.718927   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.723182   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.726590   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:49.726611   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.760011   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:49.760038   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.816552   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:49.816593   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.852003   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:49.852034   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.887907   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:49.887937   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.920728   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:49.920763   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:49.972145   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:49.972177   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:49.986365   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:49.986391   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:50.088100   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:50.088141   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:50.137382   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:50.137412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:50.181636   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:50.181668   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:50.217427   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:50.217452   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:50.575378   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:50.575421   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:53.125288   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:53.125322   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.125327   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.125331   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.125335   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.125338   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.125341   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.125347   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.125352   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.125358   60732 system_pods.go:74] duration metric: took 3.721147072s to wait for pod list to return data ...
	I0725 18:54:53.125365   60732 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:53.127677   60732 default_sa.go:45] found service account: "default"
	I0725 18:54:53.127695   60732 default_sa.go:55] duration metric: took 2.325927ms for default service account to be created ...
	I0725 18:54:53.127702   60732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:53.134656   60732 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:53.134682   60732 system_pods.go:89] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.134690   60732 system_pods.go:89] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.134697   60732 system_pods.go:89] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.134707   60732 system_pods.go:89] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.134713   60732 system_pods.go:89] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.134719   60732 system_pods.go:89] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.134729   60732 system_pods.go:89] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.134738   60732 system_pods.go:89] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.134745   60732 system_pods.go:126] duration metric: took 7.037359ms to wait for k8s-apps to be running ...
	I0725 18:54:53.134756   60732 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:53.134804   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:53.152898   60732 system_svc.go:56] duration metric: took 18.132464ms WaitForService to wait for kubelet
	I0725 18:54:53.152939   60732 kubeadm.go:582] duration metric: took 4m23.26971097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:53.152966   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:53.155626   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:53.155645   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:53.155654   60732 node_conditions.go:105] duration metric: took 2.684085ms to run NodePressure ...
	I0725 18:54:53.155664   60732 start.go:241] waiting for startup goroutines ...
	I0725 18:54:53.155670   60732 start.go:246] waiting for cluster config update ...
	I0725 18:54:53.155680   60732 start.go:255] writing updated cluster config ...
	I0725 18:54:53.155922   60732 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:53.202323   60732 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:53.204492   60732 out.go:177] * Done! kubectl is now configured to use "embed-certs-646344" cluster and "default" namespace by default
	I0725 18:54:52.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.599046   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.702358   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:54:54.702929   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:54.703166   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:54:56.600641   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:58.600997   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:59.703734   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:59.704045   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:01.099681   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:03.099863   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:05.099936   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:07.600199   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:09.600587   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:10.600594   59378 pod_ready.go:81] duration metric: took 4m0.007321371s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:55:10.600617   59378 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:55:10.600625   59378 pod_ready.go:38] duration metric: took 4m5.545225617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:55:10.600637   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:55:10.600660   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:10.600701   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:10.652016   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:10.652040   59378 cri.go:89] found id: ""
	I0725 18:55:10.652047   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:10.652099   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.656405   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:10.656471   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:10.695672   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:10.695697   59378 cri.go:89] found id: ""
	I0725 18:55:10.695706   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:10.695768   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.700362   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:10.700424   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:10.736685   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.736702   59378 cri.go:89] found id: ""
	I0725 18:55:10.736709   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:10.736755   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.740626   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:10.740686   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:10.786452   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:10.786470   59378 cri.go:89] found id: ""
	I0725 18:55:10.786478   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:10.786533   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.790873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:10.790938   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:10.826203   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:10.826238   59378 cri.go:89] found id: ""
	I0725 18:55:10.826247   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:10.826311   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.830241   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:10.830418   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:10.865432   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:10.865460   59378 cri.go:89] found id: ""
	I0725 18:55:10.865470   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:10.865527   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.869415   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:10.869469   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:10.904230   59378 cri.go:89] found id: ""
	I0725 18:55:10.904254   59378 logs.go:276] 0 containers: []
	W0725 18:55:10.904262   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:10.904267   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:10.904339   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:10.938539   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:10.938558   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:10.938563   59378 cri.go:89] found id: ""
	I0725 18:55:10.938571   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:10.938623   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:09.704361   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:09.704593   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:10.942419   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.946266   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:10.946293   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.984335   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:10.984365   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:11.021733   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:11.021762   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:11.059218   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:11.059248   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:11.110886   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:11.110919   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:11.147381   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:11.147412   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:11.644012   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:11.644052   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:11.699290   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:11.699324   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:11.750317   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:11.750350   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:11.801340   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:11.801370   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:11.835746   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:11.835773   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:11.875309   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:11.875340   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:11.888262   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:11.888286   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:14.516169   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:55:14.533223   59378 api_server.go:72] duration metric: took 4m17.191676299s to wait for apiserver process to appear ...
	I0725 18:55:14.533248   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:55:14.533283   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:14.533328   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:14.568170   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:14.568188   59378 cri.go:89] found id: ""
	I0725 18:55:14.568195   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:14.568237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.572638   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:14.572704   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:14.605953   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:14.605976   59378 cri.go:89] found id: ""
	I0725 18:55:14.605983   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:14.606029   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.609849   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:14.609912   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:14.650049   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.650068   59378 cri.go:89] found id: ""
	I0725 18:55:14.650075   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:14.650117   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.653905   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:14.653966   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:14.697059   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:14.697078   59378 cri.go:89] found id: ""
	I0725 18:55:14.697086   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:14.697145   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.701179   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:14.701245   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:14.741482   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:14.741499   59378 cri.go:89] found id: ""
	I0725 18:55:14.741507   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:14.741554   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.745355   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:14.745410   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:14.784058   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.784077   59378 cri.go:89] found id: ""
	I0725 18:55:14.784086   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:14.784146   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.788254   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:14.788354   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:14.823286   59378 cri.go:89] found id: ""
	I0725 18:55:14.823309   59378 logs.go:276] 0 containers: []
	W0725 18:55:14.823317   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:14.823322   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:14.823369   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:14.860591   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.860625   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:14.860631   59378 cri.go:89] found id: ""
	I0725 18:55:14.860639   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:14.860693   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.864444   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.868015   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:14.868034   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.902336   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:14.902361   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.951281   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:14.951312   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.987810   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:14.987836   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:15.031264   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:15.031303   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:15.082950   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:15.082981   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:15.097240   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:15.097264   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:15.195392   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:15.195422   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:15.238978   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:15.239015   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:15.278551   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:15.278586   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:15.318486   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:15.318517   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:15.354217   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:15.354245   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:15.391511   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:15.391536   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:18.296420   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:55:18.301704   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:55:18.303040   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:55:18.303059   59378 api_server.go:131] duration metric: took 3.769804671s to wait for apiserver health ...
	I0725 18:55:18.303067   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:55:18.303097   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:18.303148   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:18.340192   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:18.340210   59378 cri.go:89] found id: ""
	I0725 18:55:18.340217   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:18.340262   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.343882   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:18.343936   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:18.381885   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:18.381912   59378 cri.go:89] found id: ""
	I0725 18:55:18.381922   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:18.381979   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.385682   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:18.385749   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:18.420162   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:18.420183   59378 cri.go:89] found id: ""
	I0725 18:55:18.420190   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:18.420237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.424103   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:18.424153   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:18.462946   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:18.462987   59378 cri.go:89] found id: ""
	I0725 18:55:18.462998   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:18.463055   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.467228   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:18.467278   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:18.510007   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:18.510036   59378 cri.go:89] found id: ""
	I0725 18:55:18.510046   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:18.510103   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.513873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:18.513937   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:18.551230   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:18.551255   59378 cri.go:89] found id: ""
	I0725 18:55:18.551264   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:18.551322   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.555764   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:18.555833   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:18.593584   59378 cri.go:89] found id: ""
	I0725 18:55:18.593615   59378 logs.go:276] 0 containers: []
	W0725 18:55:18.593626   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:18.593633   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:18.593690   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:18.631912   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.631938   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.631944   59378 cri.go:89] found id: ""
	I0725 18:55:18.631952   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:18.632036   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.635895   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.639457   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:18.639481   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.677563   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:18.677595   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.716298   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:18.716353   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:19.104236   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:19.104281   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:19.157931   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:19.157965   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:19.214479   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:19.214510   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:19.265860   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:19.265887   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:19.306476   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:19.306501   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:19.340758   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:19.340783   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:19.380798   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:19.380824   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:19.439585   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:19.439619   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:19.454117   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:19.454145   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:19.558944   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:19.558972   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:22.114733   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:55:22.114766   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.114773   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.114778   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.114783   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.114788   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.114792   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.114800   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.114806   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.114815   59378 system_pods.go:74] duration metric: took 3.811742621s to wait for pod list to return data ...
	I0725 18:55:22.114827   59378 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:55:22.118211   59378 default_sa.go:45] found service account: "default"
	I0725 18:55:22.118237   59378 default_sa.go:55] duration metric: took 3.400507ms for default service account to be created ...
	I0725 18:55:22.118245   59378 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:55:22.123350   59378 system_pods.go:86] 8 kube-system pods found
	I0725 18:55:22.123375   59378 system_pods.go:89] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.123380   59378 system_pods.go:89] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.123384   59378 system_pods.go:89] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.123390   59378 system_pods.go:89] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.123394   59378 system_pods.go:89] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.123398   59378 system_pods.go:89] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.123405   59378 system_pods.go:89] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.123410   59378 system_pods.go:89] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.123417   59378 system_pods.go:126] duration metric: took 5.166628ms to wait for k8s-apps to be running ...
	I0725 18:55:22.123424   59378 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:55:22.123467   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:55:22.139784   59378 system_svc.go:56] duration metric: took 16.349883ms WaitForService to wait for kubelet
	I0725 18:55:22.139808   59378 kubeadm.go:582] duration metric: took 4m24.798265923s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:55:22.139825   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:55:22.143958   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:55:22.143981   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:55:22.143992   59378 node_conditions.go:105] duration metric: took 4.161089ms to run NodePressure ...
	I0725 18:55:22.144006   59378 start.go:241] waiting for startup goroutines ...
	I0725 18:55:22.144015   59378 start.go:246] waiting for cluster config update ...
	I0725 18:55:22.144026   59378 start.go:255] writing updated cluster config ...
	I0725 18:55:22.144382   59378 ssh_runner.go:195] Run: rm -f paused
	I0725 18:55:22.192893   59378 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0725 18:55:22.195796   59378 out.go:177] * Done! kubectl is now configured to use "no-preload-371663" cluster and "default" namespace by default
	I0725 18:55:29.705545   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:29.705871   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.707936   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:56:09.708279   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.708303   60176 kubeadm.go:310] 
	I0725 18:56:09.708361   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:56:09.708425   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:56:09.708434   60176 kubeadm.go:310] 
	I0725 18:56:09.708495   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:56:09.708548   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:56:09.708721   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:56:09.708755   60176 kubeadm.go:310] 
	I0725 18:56:09.708910   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:56:09.708960   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:56:09.708997   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:56:09.709006   60176 kubeadm.go:310] 
	I0725 18:56:09.709130   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:56:09.709230   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:56:09.709239   60176 kubeadm.go:310] 
	I0725 18:56:09.709366   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:56:09.709499   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:56:09.709608   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:56:09.709715   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:56:09.709730   60176 kubeadm.go:310] 
	I0725 18:56:09.710446   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:56:09.710594   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:56:09.710699   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0725 18:56:09.710838   60176 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
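The kubeadm message above lays out the manual triage path for a control plane that never came up. A minimal sketch of those same checks run by hand against this profile (hedged: the profile name old-k8s-version-108542 is taken from the node logs later in this report, and CONTAINERID is a placeholder for whatever the ps listing actually returns):

	# open a shell on the minikube node for this profile
	minikube ssh -p old-k8s-version-108542
	# is the kubelet running at all, and why did it stop?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# list any control-plane containers cri-o started, then read the failing one's logs
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID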
	
	I0725 18:56:09.710897   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:56:15.078699   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.367772874s)
	I0725 18:56:15.078772   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:56:15.093265   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:56:15.102513   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:56:15.102529   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:56:15.102570   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:56:15.111001   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:56:15.111059   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:56:15.119773   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:56:15.128109   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:56:15.128166   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:56:15.136753   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.145122   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:56:15.145179   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.153952   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:56:15.162067   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:56:15.162109   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:56:15.170779   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:56:15.382925   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:58:11.387751   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:58:11.387868   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0725 18:58:11.389848   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:58:11.389935   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:58:11.390076   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:58:11.390177   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:58:11.390289   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:58:11.390389   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:58:11.392281   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:58:11.392400   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:58:11.392487   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:58:11.392609   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:58:11.392698   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:58:11.392808   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:58:11.392893   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:58:11.392960   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:58:11.393054   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:58:11.393160   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:58:11.393260   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:58:11.393311   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:58:11.393362   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:58:11.393415   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:58:11.393470   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:58:11.393522   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:58:11.393573   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:58:11.393665   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:58:11.393760   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:58:11.393815   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:58:11.393888   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:58:11.395197   60176 out.go:204]   - Booting up control plane ...
	I0725 18:58:11.395292   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:58:11.395385   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:58:11.395454   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:58:11.395528   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:58:11.395674   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:58:11.395717   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:58:11.395793   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396019   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396116   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396334   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396408   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396572   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396638   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396799   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396865   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.397061   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.397069   60176 kubeadm.go:310] 
	I0725 18:58:11.397102   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:58:11.397136   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:58:11.397141   60176 kubeadm.go:310] 
	I0725 18:58:11.397169   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:58:11.397212   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:58:11.397314   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:58:11.397338   60176 kubeadm.go:310] 
	I0725 18:58:11.397462   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:58:11.397504   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:58:11.397554   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:58:11.397566   60176 kubeadm.go:310] 
	I0725 18:58:11.397657   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:58:11.397730   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:58:11.397737   60176 kubeadm.go:310] 
	I0725 18:58:11.397832   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:58:11.397928   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:58:11.398009   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:58:11.398088   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:58:11.398144   60176 kubeadm.go:310] 
	I0725 18:58:11.398184   60176 kubeadm.go:394] duration metric: took 8m7.195831536s to StartCluster
	I0725 18:58:11.398237   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:58:11.398431   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:58:11.438474   60176 cri.go:89] found id: ""
	I0725 18:58:11.438497   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.438504   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:58:11.438509   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:58:11.438560   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:58:11.470965   60176 cri.go:89] found id: ""
	I0725 18:58:11.471000   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.471013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:58:11.471021   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:58:11.471086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:58:11.503353   60176 cri.go:89] found id: ""
	I0725 18:58:11.503387   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.503402   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:58:11.503409   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:58:11.503468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:58:11.535307   60176 cri.go:89] found id: ""
	I0725 18:58:11.535340   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.535350   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:58:11.535359   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:58:11.535425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:58:11.568071   60176 cri.go:89] found id: ""
	I0725 18:58:11.568094   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.568104   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:58:11.568118   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:58:11.568183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:58:11.600126   60176 cri.go:89] found id: ""
	I0725 18:58:11.600154   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.600165   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:58:11.600172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:58:11.600234   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:58:11.632609   60176 cri.go:89] found id: ""
	I0725 18:58:11.632635   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.632642   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:58:11.632648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:58:11.632706   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:58:11.666352   60176 cri.go:89] found id: ""
	I0725 18:58:11.666376   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.666384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:58:11.666392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:58:11.666409   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:58:11.766887   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:58:11.766912   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:58:11.766930   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:58:11.885565   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:58:11.885601   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:58:11.927611   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:58:11.927637   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:58:11.978011   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:58:11.978046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0725 18:58:11.991296   60176 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 18:58:11.991350   60176 out.go:239] * 
	W0725 18:58:11.991412   60176 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.991433   60176 out.go:239] * 
	W0725 18:58:11.992535   60176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:58:11.996223   60176 out.go:177] 
	W0725 18:58:11.997418   60176 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.997464   60176 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 18:58:11.997495   60176 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 18:58:11.998869   60176 out.go:177] 
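The exit message above points at a kubelet cgroup-driver mismatch and suggests passing --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of retrying this profile with that flag (hedged: the profile name and Kubernetes version are taken from this run, and the flag is the one the message recommends, not a verified fix for this failure):

	minikube start -p old-k8s-version-108542 \
	  --kubernetes-version=v1.20.0 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd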
	
	
	==> CRI-O <==
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.211561440Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934437211539920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf47b9d8-3f72-4cf2-b9cf-ed9841372ab8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.212211675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1dac306b-11de-40cc-a25e-14c556d22b65 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.212265371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1dac306b-11de-40cc-a25e-14c556d22b65 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.212306144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1dac306b-11de-40cc-a25e-14c556d22b65 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.243932826Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=facd9070-87e5-4f25-8c33-07745284d14e name=/runtime.v1.RuntimeService/Version
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.244027616Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=facd9070-87e5-4f25-8c33-07745284d14e name=/runtime.v1.RuntimeService/Version
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.245348918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c92d8320-c4ae-4b83-8901-b8adcfcb6322 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.245751303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934437245726701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c92d8320-c4ae-4b83-8901-b8adcfcb6322 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.246535231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fda72ed-4659-49f4-a0a9-a85870ed319c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.246605967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fda72ed-4659-49f4-a0a9-a85870ed319c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.246637819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0fda72ed-4659-49f4-a0a9-a85870ed319c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.295150354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=043dfa3b-5feb-42c1-b72a-4ab42479cfdb name=/runtime.v1.RuntimeService/Version
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.295266808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=043dfa3b-5feb-42c1-b72a-4ab42479cfdb name=/runtime.v1.RuntimeService/Version
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.296641927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=342091d4-dc56-4ff5-9629-d5e227b1c553 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.297021180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934437297000765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=342091d4-dc56-4ff5-9629-d5e227b1c553 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.297490752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4b81182-2251-454a-a08c-a6a599a1871f name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.297540446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4b81182-2251-454a-a08c-a6a599a1871f name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.297571220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c4b81182-2251-454a-a08c-a6a599a1871f name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.331067030Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37647c8a-e1be-4e3c-9973-da67e5ce9a5b name=/runtime.v1.RuntimeService/Version
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.331180688Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37647c8a-e1be-4e3c-9973-da67e5ce9a5b name=/runtime.v1.RuntimeService/Version
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.332429493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03e87670-c93f-4121-b62f-ac2a6856142e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.332846986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934437332817807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03e87670-c93f-4121-b62f-ac2a6856142e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.333547509Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f08c453b-0749-4df8-9921-6b27c192a692 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.333601182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f08c453b-0749-4df8-9921-6b27c192a692 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:07:17 old-k8s-version-108542 crio[648]: time="2024-07-25 19:07:17.333635792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f08c453b-0749-4df8-9921-6b27c192a692 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul25 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055343] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037717] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.863537] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.917310] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.440772] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.925882] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.062083] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062742] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.199961] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.129009] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.312354] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[Jul25 18:50] systemd-fstab-generator[838]: Ignoring "noauto" option for root device
	[  +0.061607] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.085718] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[ +12.193987] kauditd_printk_skb: 46 callbacks suppressed
	[Jul25 18:54] systemd-fstab-generator[5076]: Ignoring "noauto" option for root device
	[Jul25 18:56] systemd-fstab-generator[5371]: Ignoring "noauto" option for root device
	[  +0.066840] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:07:17 up 17 min,  0 users,  load average: 0.08, 0.11, 0.08
	Linux old-k8s-version-108542 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a592c0, 0xc00099d9c0)
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: goroutine 156 [chan receive]:
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000a61b00)
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: goroutine 157 [select]:
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bdfef0, 0x4f0ac20, 0xc0006ff4a0, 0x1, 0xc00009e0c0)
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00025c380, 0xc00009e0c0)
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a592f0, 0xc00099da80)
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 25 19:07:17 old-k8s-version-108542 kubelet[6554]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 25 19:07:17 old-k8s-version-108542 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 25 19:07:17 old-k8s-version-108542 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108542 -n old-k8s-version-108542
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 2 (221.79214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-108542" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (466.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-25 19:11:06.696032268 +0000 UTC m=+6140.776764126
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-600433 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-600433 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.945µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-600433 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
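The test above waits for pods labelled k8s-app=kubernetes-dashboard and then tries to describe the dashboard-metrics-scraper deployment before the context deadline expires. A minimal sketch of the same check run by hand, using the context, namespace, and selector named in the output above (hedged: illustrative only, not part of the test harness):

	kubectl --context default-k8s-diff-port-600433 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-600433 -n kubernetes-dashboard \
	  describe deploy/dashboard-metrics-scraper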
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-600433 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-600433 logs -n 25: (1.276167038s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-371663                  | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-371663 --memory=2200                     | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-600433       | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:54 UTC |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-646344            | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-108542             | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-646344                 | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC | 25 Jul 24 18:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 19:09 UTC | 25 Jul 24 19:09 UTC |
	| start   | -p auto-889508 --memory=3072                           | auto-889508                  | jenkins | v1.33.1 | 25 Jul 24 19:09 UTC | 25 Jul 24 19:10 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-371663                                   | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 19:09 UTC | 25 Jul 24 19:09 UTC |
	| start   | -p kindnet-889508                                      | kindnet-889508               | jenkins | v1.33.1 | 25 Jul 24 19:09 UTC | 25 Jul 24 19:10 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p auto-889508 pgrep -a                                | auto-889508                  | jenkins | v1.33.1 | 25 Jul 24 19:10 UTC | 25 Jul 24 19:10 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p kindnet-889508 pgrep -a                             | kindnet-889508               | jenkins | v1.33.1 | 25 Jul 24 19:10 UTC | 25 Jul 24 19:10 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p auto-889508 sudo cat                                | auto-889508                  | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | /etc/nsswitch.conf                                     |                              |         |         |                     |                     |
	| ssh     | -p auto-889508 sudo cat                                | auto-889508                  | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | /etc/hosts                                             |                              |         |         |                     |                     |
	| ssh     | -p auto-889508 sudo cat                                | auto-889508                  | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | /etc/resolv.conf                                       |                              |         |         |                     |                     |
	| ssh     | -p auto-889508 sudo crictl                             | auto-889508                  | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | pods                                                   |                              |         |         |                     |                     |
	| ssh     | -p auto-889508 sudo crictl ps                          | auto-889508                  | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC |                     |
	|         | --all                                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 19:09:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 19:09:37.923826   67035 out.go:291] Setting OutFile to fd 1 ...
	I0725 19:09:37.924120   67035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:09:37.924131   67035 out.go:304] Setting ErrFile to fd 2...
	I0725 19:09:37.924136   67035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:09:37.924392   67035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 19:09:37.925177   67035 out.go:298] Setting JSON to false
	I0725 19:09:37.926324   67035 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6722,"bootTime":1721927856,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 19:09:37.926391   67035 start.go:139] virtualization: kvm guest
	I0725 19:09:37.928614   67035 out.go:177] * [kindnet-889508] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 19:09:37.930014   67035 notify.go:220] Checking for updates...
	I0725 19:09:37.930033   67035 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 19:09:37.931392   67035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 19:09:37.932771   67035 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 19:09:37.934103   67035 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 19:09:37.935315   67035 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 19:09:37.936488   67035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 19:09:37.938167   67035 config.go:182] Loaded profile config "auto-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:09:37.938310   67035 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:09:37.938445   67035 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:09:37.938569   67035 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 19:09:37.978417   67035 out.go:177] * Using the kvm2 driver based on user configuration
	I0725 19:09:37.979709   67035 start.go:297] selected driver: kvm2
	I0725 19:09:37.979725   67035 start.go:901] validating driver "kvm2" against <nil>
	I0725 19:09:37.979736   67035 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 19:09:37.980426   67035 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 19:09:37.980507   67035 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 19:09:37.996442   67035 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 19:09:37.996513   67035 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 19:09:37.996803   67035 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 19:09:37.996841   67035 cni.go:84] Creating CNI manager for "kindnet"
	I0725 19:09:37.996849   67035 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0725 19:09:37.996935   67035 start.go:340] cluster config:
	{Name:kindnet-889508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:09:37.997081   67035 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 19:09:37.998823   67035 out.go:177] * Starting "kindnet-889508" primary control-plane node in "kindnet-889508" cluster
	I0725 19:09:37.999888   67035 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 19:09:37.999923   67035 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 19:09:37.999933   67035 cache.go:56] Caching tarball of preloaded images
	I0725 19:09:38.000021   67035 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 19:09:38.000036   67035 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 19:09:38.000150   67035 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/config.json ...
	I0725 19:09:38.000175   67035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/config.json: {Name:mk5e0ccdb7b0c2944973df6305537a57ec44e3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:09:38.000370   67035 start.go:360] acquireMachinesLock for kindnet-889508: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 19:09:38.000417   67035 start.go:364] duration metric: took 26.127µs to acquireMachinesLock for "kindnet-889508"
	I0725 19:09:38.000436   67035 start.go:93] Provisioning new machine with config: &{Name:kindnet-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:kindnet-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 19:09:38.000500   67035 start.go:125] createHost starting for "" (driver="kvm2")
	I0725 19:09:37.445816   66554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 19:09:37.581527   66554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 19:09:37.596758   66554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 19:09:37.616063   66554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 19:09:37.616119   66554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:09:37.626220   66554 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 19:09:37.626297   66554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:09:37.636715   66554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:09:37.649517   66554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:09:37.660347   66554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 19:09:37.671455   66554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:09:37.681867   66554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:09:37.698161   66554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:09:37.708425   66554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 19:09:37.717484   66554 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 19:09:37.717536   66554 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 19:09:37.730012   66554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 19:09:37.738687   66554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:09:37.867933   66554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 19:09:38.013773   66554 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 19:09:38.013844   66554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 19:09:38.018553   66554 start.go:563] Will wait 60s for crictl version
	I0725 19:09:38.018604   66554 ssh_runner.go:195] Run: which crictl
	I0725 19:09:38.022960   66554 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 19:09:38.070442   66554 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 19:09:38.070515   66554 ssh_runner.go:195] Run: crio --version
	I0725 19:09:38.101154   66554 ssh_runner.go:195] Run: crio --version
	I0725 19:09:38.131258   66554 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 19:09:38.132406   66554 main.go:141] libmachine: (auto-889508) Calling .GetIP
	I0725 19:09:38.135540   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:38.135992   66554 main.go:141] libmachine: (auto-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8a:40", ip: ""} in network mk-auto-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:09:26 +0000 UTC Type:0 Mac:52:54:00:b3:8a:40 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:auto-889508 Clientid:01:52:54:00:b3:8a:40}
	I0725 19:09:38.136024   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined IP address 192.168.39.77 and MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:38.136213   66554 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 19:09:38.140293   66554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:09:38.152477   66554 kubeadm.go:883] updating cluster {Name:auto-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:auto-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 19:09:38.152582   66554 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 19:09:38.152626   66554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:09:38.185748   66554 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 19:09:38.185823   66554 ssh_runner.go:195] Run: which lz4
	I0725 19:09:38.189537   66554 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 19:09:38.193574   66554 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 19:09:38.193607   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 19:09:39.493621   66554 crio.go:462] duration metric: took 1.304119044s to copy over tarball
	I0725 19:09:39.493681   66554 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 19:09:41.988120   66554 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.494412989s)
	I0725 19:09:41.988149   66554 crio.go:469] duration metric: took 2.494502198s to extract the tarball
	I0725 19:09:41.988158   66554 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 19:09:42.044434   66554 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:09:42.094808   66554 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 19:09:42.094833   66554 cache_images.go:84] Images are preloaded, skipping loading
	I0725 19:09:42.094843   66554 kubeadm.go:934] updating node { 192.168.39.77 8443 v1.30.3 crio true true} ...
	I0725 19:09:42.094957   66554 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-889508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:auto-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 19:09:42.095038   66554 ssh_runner.go:195] Run: crio config
	I0725 19:09:42.149444   66554 cni.go:84] Creating CNI manager for ""
	I0725 19:09:42.149467   66554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 19:09:42.149478   66554 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 19:09:42.149505   66554 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-889508 NodeName:auto-889508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 19:09:42.149664   66554 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-889508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 19:09:42.149740   66554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 19:09:42.160850   66554 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 19:09:42.160930   66554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 19:09:42.170091   66554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0725 19:09:42.187334   66554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 19:09:42.204034   66554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0725 19:09:42.227501   66554 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I0725 19:09:42.233450   66554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.77	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:09:42.253205   66554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:09:42.391320   66554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:09:42.411873   66554 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508 for IP: 192.168.39.77
	I0725 19:09:42.411895   66554 certs.go:194] generating shared ca certs ...
	I0725 19:09:42.411910   66554 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:09:42.412079   66554 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 19:09:42.412132   66554 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 19:09:42.412143   66554 certs.go:256] generating profile certs ...
	I0725 19:09:42.412211   66554 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/client.key
	I0725 19:09:42.412237   66554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/client.crt with IP's: []
	I0725 19:09:38.002096   67035 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 19:09:38.002239   67035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:09:38.002273   67035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:09:38.017963   67035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35225
	I0725 19:09:38.018426   67035 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:09:38.019058   67035 main.go:141] libmachine: Using API Version  1
	I0725 19:09:38.019084   67035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:09:38.019436   67035 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:09:38.019660   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetMachineName
	I0725 19:09:38.019812   67035 main.go:141] libmachine: (kindnet-889508) Calling .DriverName
	I0725 19:09:38.019963   67035 start.go:159] libmachine.API.Create for "kindnet-889508" (driver="kvm2")
	I0725 19:09:38.019993   67035 client.go:168] LocalClient.Create starting
	I0725 19:09:38.020026   67035 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 19:09:38.020064   67035 main.go:141] libmachine: Decoding PEM data...
	I0725 19:09:38.020092   67035 main.go:141] libmachine: Parsing certificate...
	I0725 19:09:38.020172   67035 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 19:09:38.020206   67035 main.go:141] libmachine: Decoding PEM data...
	I0725 19:09:38.020220   67035 main.go:141] libmachine: Parsing certificate...
	I0725 19:09:38.020255   67035 main.go:141] libmachine: Running pre-create checks...
	I0725 19:09:38.020280   67035 main.go:141] libmachine: (kindnet-889508) Calling .PreCreateCheck
	I0725 19:09:38.020758   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetConfigRaw
	I0725 19:09:38.021236   67035 main.go:141] libmachine: Creating machine...
	I0725 19:09:38.021255   67035 main.go:141] libmachine: (kindnet-889508) Calling .Create
	I0725 19:09:38.021386   67035 main.go:141] libmachine: (kindnet-889508) Creating KVM machine...
	I0725 19:09:38.022916   67035 main.go:141] libmachine: (kindnet-889508) DBG | found existing default KVM network
	I0725 19:09:38.024752   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:38.024543   67057 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b9:d5:92} reservation:<nil>}
	I0725 19:09:38.025744   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:38.025659   67057 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:10:ad:5e} reservation:<nil>}
	I0725 19:09:38.026746   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:38.026680   67057 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:93:c9:4f} reservation:<nil>}
	I0725 19:09:38.028217   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:38.028126   67057 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000356510}
	I0725 19:09:38.028281   67035 main.go:141] libmachine: (kindnet-889508) DBG | created network xml: 
	I0725 19:09:38.028302   67035 main.go:141] libmachine: (kindnet-889508) DBG | <network>
	I0725 19:09:38.028317   67035 main.go:141] libmachine: (kindnet-889508) DBG |   <name>mk-kindnet-889508</name>
	I0725 19:09:38.028357   67035 main.go:141] libmachine: (kindnet-889508) DBG |   <dns enable='no'/>
	I0725 19:09:38.028366   67035 main.go:141] libmachine: (kindnet-889508) DBG |   
	I0725 19:09:38.028376   67035 main.go:141] libmachine: (kindnet-889508) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0725 19:09:38.028385   67035 main.go:141] libmachine: (kindnet-889508) DBG |     <dhcp>
	I0725 19:09:38.028394   67035 main.go:141] libmachine: (kindnet-889508) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0725 19:09:38.028406   67035 main.go:141] libmachine: (kindnet-889508) DBG |     </dhcp>
	I0725 19:09:38.028417   67035 main.go:141] libmachine: (kindnet-889508) DBG |   </ip>
	I0725 19:09:38.028426   67035 main.go:141] libmachine: (kindnet-889508) DBG |   
	I0725 19:09:38.028435   67035 main.go:141] libmachine: (kindnet-889508) DBG | </network>
	I0725 19:09:38.028445   67035 main.go:141] libmachine: (kindnet-889508) DBG | 
	I0725 19:09:38.033553   67035 main.go:141] libmachine: (kindnet-889508) DBG | trying to create private KVM network mk-kindnet-889508 192.168.72.0/24...
	I0725 19:09:38.110470   67035 main.go:141] libmachine: (kindnet-889508) DBG | private KVM network mk-kindnet-889508 192.168.72.0/24 created
	I0725 19:09:38.110662   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:38.110538   67057 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 19:09:38.110736   67035 main.go:141] libmachine: (kindnet-889508) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508 ...
	I0725 19:09:38.110803   67035 main.go:141] libmachine: (kindnet-889508) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 19:09:38.110835   67035 main.go:141] libmachine: (kindnet-889508) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 19:09:38.396421   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:38.396189   67057 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/id_rsa...
	I0725 19:09:38.613133   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:38.612981   67057 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/kindnet-889508.rawdisk...
	I0725 19:09:38.613178   67035 main.go:141] libmachine: (kindnet-889508) DBG | Writing magic tar header
	I0725 19:09:38.613195   67035 main.go:141] libmachine: (kindnet-889508) DBG | Writing SSH key tar header
	I0725 19:09:38.613208   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:38.613152   67057 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508 ...
	I0725 19:09:38.613390   67035 main.go:141] libmachine: (kindnet-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508
	I0725 19:09:38.613459   67035 main.go:141] libmachine: (kindnet-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508 (perms=drwx------)
	I0725 19:09:38.613481   67035 main.go:141] libmachine: (kindnet-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 19:09:38.613498   67035 main.go:141] libmachine: (kindnet-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 19:09:38.613510   67035 main.go:141] libmachine: (kindnet-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 19:09:38.613521   67035 main.go:141] libmachine: (kindnet-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 19:09:38.613536   67035 main.go:141] libmachine: (kindnet-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 19:09:38.613545   67035 main.go:141] libmachine: (kindnet-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 19:09:38.613561   67035 main.go:141] libmachine: (kindnet-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 19:09:38.613571   67035 main.go:141] libmachine: (kindnet-889508) DBG | Checking permissions on dir: /home/jenkins
	I0725 19:09:38.613582   67035 main.go:141] libmachine: (kindnet-889508) DBG | Checking permissions on dir: /home
	I0725 19:09:38.613592   67035 main.go:141] libmachine: (kindnet-889508) DBG | Skipping /home - not owner
	I0725 19:09:38.613607   67035 main.go:141] libmachine: (kindnet-889508) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 19:09:38.613618   67035 main.go:141] libmachine: (kindnet-889508) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 19:09:38.613631   67035 main.go:141] libmachine: (kindnet-889508) Creating domain...
	I0725 19:09:38.614933   67035 main.go:141] libmachine: (kindnet-889508) define libvirt domain using xml: 
	I0725 19:09:38.614966   67035 main.go:141] libmachine: (kindnet-889508) <domain type='kvm'>
	I0725 19:09:38.614978   67035 main.go:141] libmachine: (kindnet-889508)   <name>kindnet-889508</name>
	I0725 19:09:38.615014   67035 main.go:141] libmachine: (kindnet-889508)   <memory unit='MiB'>3072</memory>
	I0725 19:09:38.615028   67035 main.go:141] libmachine: (kindnet-889508)   <vcpu>2</vcpu>
	I0725 19:09:38.615040   67035 main.go:141] libmachine: (kindnet-889508)   <features>
	I0725 19:09:38.615061   67035 main.go:141] libmachine: (kindnet-889508)     <acpi/>
	I0725 19:09:38.615085   67035 main.go:141] libmachine: (kindnet-889508)     <apic/>
	I0725 19:09:38.615098   67035 main.go:141] libmachine: (kindnet-889508)     <pae/>
	I0725 19:09:38.615107   67035 main.go:141] libmachine: (kindnet-889508)     
	I0725 19:09:38.615119   67035 main.go:141] libmachine: (kindnet-889508)   </features>
	I0725 19:09:38.615132   67035 main.go:141] libmachine: (kindnet-889508)   <cpu mode='host-passthrough'>
	I0725 19:09:38.615142   67035 main.go:141] libmachine: (kindnet-889508)   
	I0725 19:09:38.615151   67035 main.go:141] libmachine: (kindnet-889508)   </cpu>
	I0725 19:09:38.615165   67035 main.go:141] libmachine: (kindnet-889508)   <os>
	I0725 19:09:38.615553   67035 main.go:141] libmachine: (kindnet-889508)     <type>hvm</type>
	I0725 19:09:38.615577   67035 main.go:141] libmachine: (kindnet-889508)     <boot dev='cdrom'/>
	I0725 19:09:38.615587   67035 main.go:141] libmachine: (kindnet-889508)     <boot dev='hd'/>
	I0725 19:09:38.615597   67035 main.go:141] libmachine: (kindnet-889508)     <bootmenu enable='no'/>
	I0725 19:09:38.615611   67035 main.go:141] libmachine: (kindnet-889508)   </os>
	I0725 19:09:38.615627   67035 main.go:141] libmachine: (kindnet-889508)   <devices>
	I0725 19:09:38.615642   67035 main.go:141] libmachine: (kindnet-889508)     <disk type='file' device='cdrom'>
	I0725 19:09:38.615658   67035 main.go:141] libmachine: (kindnet-889508)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/boot2docker.iso'/>
	I0725 19:09:38.615671   67035 main.go:141] libmachine: (kindnet-889508)       <target dev='hdc' bus='scsi'/>
	I0725 19:09:38.615685   67035 main.go:141] libmachine: (kindnet-889508)       <readonly/>
	I0725 19:09:38.615710   67035 main.go:141] libmachine: (kindnet-889508)     </disk>
	I0725 19:09:38.615735   67035 main.go:141] libmachine: (kindnet-889508)     <disk type='file' device='disk'>
	I0725 19:09:38.615750   67035 main.go:141] libmachine: (kindnet-889508)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 19:09:38.615766   67035 main.go:141] libmachine: (kindnet-889508)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/kindnet-889508.rawdisk'/>
	I0725 19:09:38.615775   67035 main.go:141] libmachine: (kindnet-889508)       <target dev='hda' bus='virtio'/>
	I0725 19:09:38.615785   67035 main.go:141] libmachine: (kindnet-889508)     </disk>
	I0725 19:09:38.615794   67035 main.go:141] libmachine: (kindnet-889508)     <interface type='network'>
	I0725 19:09:38.615805   67035 main.go:141] libmachine: (kindnet-889508)       <source network='mk-kindnet-889508'/>
	I0725 19:09:38.615814   67035 main.go:141] libmachine: (kindnet-889508)       <model type='virtio'/>
	I0725 19:09:38.615824   67035 main.go:141] libmachine: (kindnet-889508)     </interface>
	I0725 19:09:38.615834   67035 main.go:141] libmachine: (kindnet-889508)     <interface type='network'>
	I0725 19:09:38.615859   67035 main.go:141] libmachine: (kindnet-889508)       <source network='default'/>
	I0725 19:09:38.615870   67035 main.go:141] libmachine: (kindnet-889508)       <model type='virtio'/>
	I0725 19:09:38.615879   67035 main.go:141] libmachine: (kindnet-889508)     </interface>
	I0725 19:09:38.615888   67035 main.go:141] libmachine: (kindnet-889508)     <serial type='pty'>
	I0725 19:09:38.615898   67035 main.go:141] libmachine: (kindnet-889508)       <target port='0'/>
	I0725 19:09:38.615906   67035 main.go:141] libmachine: (kindnet-889508)     </serial>
	I0725 19:09:38.615924   67035 main.go:141] libmachine: (kindnet-889508)     <console type='pty'>
	I0725 19:09:38.615937   67035 main.go:141] libmachine: (kindnet-889508)       <target type='serial' port='0'/>
	I0725 19:09:38.615947   67035 main.go:141] libmachine: (kindnet-889508)     </console>
	I0725 19:09:38.615956   67035 main.go:141] libmachine: (kindnet-889508)     <rng model='virtio'>
	I0725 19:09:38.615978   67035 main.go:141] libmachine: (kindnet-889508)       <backend model='random'>/dev/random</backend>
	I0725 19:09:38.615989   67035 main.go:141] libmachine: (kindnet-889508)     </rng>
	I0725 19:09:38.615997   67035 main.go:141] libmachine: (kindnet-889508)     
	I0725 19:09:38.616011   67035 main.go:141] libmachine: (kindnet-889508)     
	I0725 19:09:38.616018   67035 main.go:141] libmachine: (kindnet-889508)   </devices>
	I0725 19:09:38.616027   67035 main.go:141] libmachine: (kindnet-889508) </domain>
	I0725 19:09:38.616037   67035 main.go:141] libmachine: (kindnet-889508) 
	I0725 19:09:38.620562   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:cc:1e:61 in network default
	I0725 19:09:38.621323   67035 main.go:141] libmachine: (kindnet-889508) Ensuring networks are active...
	I0725 19:09:38.621338   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:38.622294   67035 main.go:141] libmachine: (kindnet-889508) Ensuring network default is active
	I0725 19:09:38.622714   67035 main.go:141] libmachine: (kindnet-889508) Ensuring network mk-kindnet-889508 is active
	I0725 19:09:38.623497   67035 main.go:141] libmachine: (kindnet-889508) Getting domain xml...
	I0725 19:09:38.624436   67035 main.go:141] libmachine: (kindnet-889508) Creating domain...
	I0725 19:09:40.131120   67035 main.go:141] libmachine: (kindnet-889508) Waiting to get IP...
	I0725 19:09:40.132125   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:40.132630   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:40.132678   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:40.132621   67057 retry.go:31] will retry after 259.237175ms: waiting for machine to come up
	I0725 19:09:40.393174   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:40.393781   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:40.393808   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:40.393731   67057 retry.go:31] will retry after 248.61524ms: waiting for machine to come up
	I0725 19:09:40.644176   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:40.644801   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:40.644830   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:40.644755   67057 retry.go:31] will retry after 325.248407ms: waiting for machine to come up
	I0725 19:09:40.972186   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:40.972905   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:40.972935   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:40.972805   67057 retry.go:31] will retry after 452.164838ms: waiting for machine to come up
	I0725 19:09:41.426310   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:41.426838   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:41.426867   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:41.426802   67057 retry.go:31] will retry after 540.099253ms: waiting for machine to come up
	I0725 19:09:41.968652   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:41.969201   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:41.969234   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:41.969148   67057 retry.go:31] will retry after 576.181066ms: waiting for machine to come up
	I0725 19:09:42.546836   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:42.547322   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:42.547361   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:42.547257   67057 retry.go:31] will retry after 912.224662ms: waiting for machine to come up
	I0725 19:09:42.790697   66554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/client.crt ...
	I0725 19:09:42.790735   66554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/client.crt: {Name:mkd99ea2e27f1d0e5907b530a6771bd3c147fc49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:09:42.790888   66554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/client.key ...
	I0725 19:09:42.790900   66554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/client.key: {Name:mk0d16d2cf03bd76996f320aa62983325f71b3d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:09:42.790975   66554 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.key.7000de6e
	I0725 19:09:42.790991   66554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.crt.7000de6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.77]
	I0725 19:09:42.848884   66554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.crt.7000de6e ...
	I0725 19:09:42.848911   66554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.crt.7000de6e: {Name:mkf316fc9abb5cad4b9d3c24cbd7a2b53d0d930e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:09:42.849066   66554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.key.7000de6e ...
	I0725 19:09:42.849084   66554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.key.7000de6e: {Name:mk35ece56cc6386499fbd71c34fcc66358c6442b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:09:42.849151   66554 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.crt.7000de6e -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.crt
	I0725 19:09:42.849236   66554 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.key.7000de6e -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.key
	I0725 19:09:42.849291   66554 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/proxy-client.key
	I0725 19:09:42.849305   66554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/proxy-client.crt with IP's: []
	I0725 19:09:43.032501   66554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/proxy-client.crt ...
	I0725 19:09:43.032531   66554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/proxy-client.crt: {Name:mk80f4c460661d2f3ad7f07c1e5810b4493782c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:09:43.032700   66554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/proxy-client.key ...
	I0725 19:09:43.032711   66554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/proxy-client.key: {Name:mka55c46189b4f25aca5cd15d45134dc05efab06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:09:43.032909   66554 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 19:09:43.032948   66554 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 19:09:43.032957   66554 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 19:09:43.032980   66554 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 19:09:43.033002   66554 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 19:09:43.033023   66554 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 19:09:43.033061   66554 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 19:09:43.033663   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 19:09:43.063551   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 19:09:43.093466   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 19:09:43.126957   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 19:09:43.152763   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0725 19:09:43.176481   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 19:09:43.198867   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 19:09:43.221536   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 19:09:43.244085   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 19:09:43.268129   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 19:09:43.293020   66554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 19:09:43.319475   66554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 19:09:43.337733   66554 ssh_runner.go:195] Run: openssl version
	I0725 19:09:43.343206   66554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 19:09:43.357215   66554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 19:09:43.362957   66554 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 19:09:43.363024   66554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 19:09:43.370662   66554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 19:09:43.383331   66554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 19:09:43.394683   66554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:09:43.399137   66554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:09:43.399197   66554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:09:43.404718   66554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 19:09:43.414955   66554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 19:09:43.428979   66554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 19:09:43.434692   66554 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 19:09:43.434758   66554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 19:09:43.441952   66554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 19:09:43.451943   66554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 19:09:43.455824   66554 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 19:09:43.455907   66554 kubeadm.go:392] StartCluster: {Name:auto-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:09:43.455981   66554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 19:09:43.456034   66554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 19:09:43.492014   66554 cri.go:89] found id: ""
	I0725 19:09:43.492081   66554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 19:09:43.502802   66554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 19:09:43.513104   66554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 19:09:43.525870   66554 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 19:09:43.525889   66554 kubeadm.go:157] found existing configuration files:
	
	I0725 19:09:43.525936   66554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 19:09:43.535246   66554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 19:09:43.535313   66554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 19:09:43.545603   66554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 19:09:43.556078   66554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 19:09:43.556137   66554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 19:09:43.566982   66554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 19:09:43.576267   66554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 19:09:43.576351   66554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 19:09:43.585657   66554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 19:09:43.594608   66554 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 19:09:43.594668   66554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 19:09:43.603409   66554 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 19:09:43.806004   66554 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 19:09:43.460762   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:43.461368   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:43.461397   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:43.461326   67057 retry.go:31] will retry after 1.096173847s: waiting for machine to come up
	I0725 19:09:44.559689   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:44.560125   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:44.560153   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:44.560069   67057 retry.go:31] will retry after 1.125480035s: waiting for machine to come up
	I0725 19:09:45.686917   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:45.687516   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:45.687559   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:45.687462   67057 retry.go:31] will retry after 1.583372775s: waiting for machine to come up
	I0725 19:09:47.273138   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:47.273602   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:47.273626   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:47.273559   67057 retry.go:31] will retry after 1.99348525s: waiting for machine to come up
	I0725 19:09:49.269161   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:49.269703   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:49.269725   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:49.269659   67057 retry.go:31] will retry after 3.183380704s: waiting for machine to come up
	I0725 19:09:52.454840   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:52.455312   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:52.455339   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:52.455287   67057 retry.go:31] will retry after 3.969613455s: waiting for machine to come up
	I0725 19:09:53.433132   66554 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 19:09:53.433228   66554 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 19:09:53.433336   66554 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 19:09:53.433467   66554 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 19:09:53.433548   66554 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 19:09:53.433611   66554 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 19:09:53.435262   66554 out.go:204]   - Generating certificates and keys ...
	I0725 19:09:53.435331   66554 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 19:09:53.435401   66554 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 19:09:53.435492   66554 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 19:09:53.435554   66554 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 19:09:53.435607   66554 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 19:09:53.435685   66554 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 19:09:53.435748   66554 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 19:09:53.435843   66554 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-889508 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	I0725 19:09:53.435891   66554 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 19:09:53.435996   66554 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-889508 localhost] and IPs [192.168.39.77 127.0.0.1 ::1]
	I0725 19:09:53.436056   66554 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 19:09:53.436109   66554 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 19:09:53.436147   66554 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 19:09:53.436195   66554 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 19:09:53.436244   66554 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 19:09:53.436298   66554 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 19:09:53.436395   66554 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 19:09:53.436480   66554 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 19:09:53.436564   66554 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 19:09:53.436702   66554 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 19:09:53.436799   66554 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 19:09:53.438031   66554 out.go:204]   - Booting up control plane ...
	I0725 19:09:53.438136   66554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 19:09:53.438219   66554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 19:09:53.438341   66554 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 19:09:53.438483   66554 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 19:09:53.438598   66554 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 19:09:53.438670   66554 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 19:09:53.438777   66554 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 19:09:53.438847   66554 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 19:09:53.438896   66554 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.950725ms
	I0725 19:09:53.438969   66554 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 19:09:53.439038   66554 kubeadm.go:310] [api-check] The API server is healthy after 5.0018723s
	I0725 19:09:53.439142   66554 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 19:09:53.439284   66554 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 19:09:53.439364   66554 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 19:09:53.439600   66554 kubeadm.go:310] [mark-control-plane] Marking the node auto-889508 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 19:09:53.439689   66554 kubeadm.go:310] [bootstrap-token] Using token: 3v9234.2rlup6m27fvqzcf6
	I0725 19:09:53.441060   66554 out.go:204]   - Configuring RBAC rules ...
	I0725 19:09:53.441156   66554 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 19:09:53.441235   66554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 19:09:53.441353   66554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 19:09:53.441523   66554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 19:09:53.441633   66554 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 19:09:53.441709   66554 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 19:09:53.441810   66554 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 19:09:53.441848   66554 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 19:09:53.441890   66554 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 19:09:53.441896   66554 kubeadm.go:310] 
	I0725 19:09:53.441944   66554 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 19:09:53.441953   66554 kubeadm.go:310] 
	I0725 19:09:53.442034   66554 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 19:09:53.442040   66554 kubeadm.go:310] 
	I0725 19:09:53.442069   66554 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 19:09:53.442141   66554 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 19:09:53.442222   66554 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 19:09:53.442246   66554 kubeadm.go:310] 
	I0725 19:09:53.442296   66554 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 19:09:53.442303   66554 kubeadm.go:310] 
	I0725 19:09:53.442342   66554 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 19:09:53.442349   66554 kubeadm.go:310] 
	I0725 19:09:53.442407   66554 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 19:09:53.442469   66554 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 19:09:53.442527   66554 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 19:09:53.442534   66554 kubeadm.go:310] 
	I0725 19:09:53.442611   66554 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 19:09:53.442712   66554 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 19:09:53.442726   66554 kubeadm.go:310] 
	I0725 19:09:53.442850   66554 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3v9234.2rlup6m27fvqzcf6 \
	I0725 19:09:53.442996   66554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 \
	I0725 19:09:53.443039   66554 kubeadm.go:310] 	--control-plane 
	I0725 19:09:53.443048   66554 kubeadm.go:310] 
	I0725 19:09:53.443160   66554 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 19:09:53.443167   66554 kubeadm.go:310] 
	I0725 19:09:53.443273   66554 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3v9234.2rlup6m27fvqzcf6 \
	I0725 19:09:53.443418   66554 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 
	I0725 19:09:53.443430   66554 cni.go:84] Creating CNI manager for ""
	I0725 19:09:53.443439   66554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 19:09:53.444805   66554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 19:09:53.446016   66554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 19:09:53.460993   66554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 19:09:53.479237   66554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 19:09:53.479308   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:53.479315   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-889508 minikube.k8s.io/updated_at=2024_07_25T19_09_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=auto-889508 minikube.k8s.io/primary=true
	I0725 19:09:53.508900   66554 ops.go:34] apiserver oom_adj: -16
	I0725 19:09:53.618290   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:54.119339   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:54.618462   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:55.119367   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:55.618354   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:56.118896   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:56.618589   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:57.118439   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:56.426559   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:09:56.427120   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find current IP address of domain kindnet-889508 in network mk-kindnet-889508
	I0725 19:09:56.427146   67035 main.go:141] libmachine: (kindnet-889508) DBG | I0725 19:09:56.427071   67057 retry.go:31] will retry after 4.774840298s: waiting for machine to come up
	I0725 19:09:57.618905   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:58.119232   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:58.619328   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:59.119060   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:09:59.619097   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:00.118866   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:00.618968   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:01.118565   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:01.618944   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:02.118367   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:01.205400   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.205875   67035 main.go:141] libmachine: (kindnet-889508) Found IP for machine: 192.168.72.127
	I0725 19:10:01.205897   67035 main.go:141] libmachine: (kindnet-889508) Reserving static IP address...
	I0725 19:10:01.205910   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has current primary IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.206365   67035 main.go:141] libmachine: (kindnet-889508) DBG | unable to find host DHCP lease matching {name: "kindnet-889508", mac: "52:54:00:03:c7:53", ip: "192.168.72.127"} in network mk-kindnet-889508
	I0725 19:10:01.284186   67035 main.go:141] libmachine: (kindnet-889508) DBG | Getting to WaitForSSH function...
	I0725 19:10:01.284216   67035 main.go:141] libmachine: (kindnet-889508) Reserved static IP address: 192.168.72.127
	I0725 19:10:01.284230   67035 main.go:141] libmachine: (kindnet-889508) Waiting for SSH to be available...
	I0725 19:10:01.287605   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.288183   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:01.288210   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.288243   67035 main.go:141] libmachine: (kindnet-889508) DBG | Using SSH client type: external
	I0725 19:10:01.288274   67035 main.go:141] libmachine: (kindnet-889508) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/id_rsa (-rw-------)
	I0725 19:10:01.288308   67035 main.go:141] libmachine: (kindnet-889508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 19:10:01.288355   67035 main.go:141] libmachine: (kindnet-889508) DBG | About to run SSH command:
	I0725 19:10:01.288372   67035 main.go:141] libmachine: (kindnet-889508) DBG | exit 0
	I0725 19:10:01.420539   67035 main.go:141] libmachine: (kindnet-889508) DBG | SSH cmd err, output: <nil>: 
	I0725 19:10:01.420829   67035 main.go:141] libmachine: (kindnet-889508) KVM machine creation complete!
	I0725 19:10:01.421153   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetConfigRaw
	I0725 19:10:01.421722   67035 main.go:141] libmachine: (kindnet-889508) Calling .DriverName
	I0725 19:10:01.421930   67035 main.go:141] libmachine: (kindnet-889508) Calling .DriverName
	I0725 19:10:01.422128   67035 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 19:10:01.422143   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetState
	I0725 19:10:01.423567   67035 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 19:10:01.423593   67035 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 19:10:01.423601   67035 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 19:10:01.423617   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:01.426063   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.426454   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:01.426484   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.426687   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:01.426881   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:01.427048   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:01.427189   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:01.427346   67035 main.go:141] libmachine: Using SSH client type: native
	I0725 19:10:01.427539   67035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.127 22 <nil> <nil>}
	I0725 19:10:01.427551   67035 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 19:10:01.543514   67035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 19:10:01.543537   67035 main.go:141] libmachine: Detecting the provisioner...
	I0725 19:10:01.543560   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:01.546454   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.546845   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:01.546873   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.547032   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:01.547241   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:01.547452   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:01.547613   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:01.547788   67035 main.go:141] libmachine: Using SSH client type: native
	I0725 19:10:01.547971   67035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.127 22 <nil> <nil>}
	I0725 19:10:01.547983   67035 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 19:10:01.661986   67035 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 19:10:01.662079   67035 main.go:141] libmachine: found compatible host: buildroot
	I0725 19:10:01.662091   67035 main.go:141] libmachine: Provisioning with buildroot...
	I0725 19:10:01.662104   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetMachineName
	I0725 19:10:01.662366   67035 buildroot.go:166] provisioning hostname "kindnet-889508"
	I0725 19:10:01.662396   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetMachineName
	I0725 19:10:01.662597   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:01.665517   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.666190   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:01.666219   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.666370   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:01.666564   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:01.666715   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:01.666841   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:01.667009   67035 main.go:141] libmachine: Using SSH client type: native
	I0725 19:10:01.667213   67035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.127 22 <nil> <nil>}
	I0725 19:10:01.667227   67035 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-889508 && echo "kindnet-889508" | sudo tee /etc/hostname
	I0725 19:10:01.790400   67035 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-889508
	
	I0725 19:10:01.790428   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:01.793504   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.793913   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:01.793950   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.794104   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:01.794277   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:01.794450   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:01.794621   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:01.794843   67035 main.go:141] libmachine: Using SSH client type: native
	I0725 19:10:01.795009   67035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.127 22 <nil> <nil>}
	I0725 19:10:01.795026   67035 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-889508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-889508/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-889508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 19:10:01.913065   67035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 19:10:01.913100   67035 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 19:10:01.913147   67035 buildroot.go:174] setting up certificates
	I0725 19:10:01.913159   67035 provision.go:84] configureAuth start
	I0725 19:10:01.913175   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetMachineName
	I0725 19:10:01.913495   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetIP
	I0725 19:10:01.916969   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.917366   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:01.917396   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.917545   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:01.920106   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.920524   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:01.920558   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:01.920733   67035 provision.go:143] copyHostCerts
	I0725 19:10:01.920789   67035 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 19:10:01.920809   67035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 19:10:01.920891   67035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 19:10:01.921031   67035 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 19:10:01.921041   67035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 19:10:01.921068   67035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 19:10:01.921147   67035 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 19:10:01.921159   67035 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 19:10:01.921177   67035 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 19:10:01.921240   67035 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.kindnet-889508 san=[127.0.0.1 192.168.72.127 kindnet-889508 localhost minikube]
	I0725 19:10:02.060249   67035 provision.go:177] copyRemoteCerts
	I0725 19:10:02.060307   67035 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 19:10:02.060354   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:02.063348   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.063905   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:02.063931   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.064164   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:02.064400   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:02.064615   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:02.064771   67035 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/id_rsa Username:docker}
	I0725 19:10:02.155211   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 19:10:02.180931   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0725 19:10:02.206492   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 19:10:02.230230   67035 provision.go:87] duration metric: took 317.054529ms to configureAuth
	I0725 19:10:02.230260   67035 buildroot.go:189] setting minikube options for container-runtime
	I0725 19:10:02.230458   67035 config.go:182] Loaded profile config "kindnet-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:10:02.230560   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:02.233515   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.233911   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:02.233931   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.234074   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:02.234296   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:02.234522   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:02.234733   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:02.234932   67035 main.go:141] libmachine: Using SSH client type: native
	I0725 19:10:02.235082   67035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.127 22 <nil> <nil>}
	I0725 19:10:02.235095   67035 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 19:10:02.499865   67035 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 19:10:02.499908   67035 main.go:141] libmachine: Checking connection to Docker...
	I0725 19:10:02.499920   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetURL
	I0725 19:10:02.501389   67035 main.go:141] libmachine: (kindnet-889508) DBG | Using libvirt version 6000000
	I0725 19:10:02.504315   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.504768   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:02.504800   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.504992   67035 main.go:141] libmachine: Docker is up and running!
	I0725 19:10:02.505010   67035 main.go:141] libmachine: Reticulating splines...
	I0725 19:10:02.505018   67035 client.go:171] duration metric: took 24.485014233s to LocalClient.Create
	I0725 19:10:02.505044   67035 start.go:167] duration metric: took 24.485082411s to libmachine.API.Create "kindnet-889508"
	I0725 19:10:02.505057   67035 start.go:293] postStartSetup for "kindnet-889508" (driver="kvm2")
	I0725 19:10:02.505071   67035 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 19:10:02.505096   67035 main.go:141] libmachine: (kindnet-889508) Calling .DriverName
	I0725 19:10:02.505373   67035 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 19:10:02.505399   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:02.508076   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.508472   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:02.508502   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.508596   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:02.508775   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:02.508925   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:02.509084   67035 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/id_rsa Username:docker}
	I0725 19:10:02.603202   67035 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 19:10:02.607184   67035 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 19:10:02.607214   67035 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 19:10:02.607282   67035 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 19:10:02.607382   67035 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 19:10:02.607485   67035 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 19:10:02.616859   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 19:10:02.646135   67035 start.go:296] duration metric: took 141.063689ms for postStartSetup
	I0725 19:10:02.646185   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetConfigRaw
	I0725 19:10:02.646821   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetIP
	I0725 19:10:02.650126   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.650635   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:02.650657   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.650886   67035 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/config.json ...
	I0725 19:10:02.651108   67035 start.go:128] duration metric: took 24.650598112s to createHost
	I0725 19:10:02.651132   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:02.653682   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.654076   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:02.654103   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.654211   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:02.654417   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:02.654579   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:02.654765   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:02.654995   67035 main.go:141] libmachine: Using SSH client type: native
	I0725 19:10:02.655214   67035 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.127 22 <nil> <nil>}
	I0725 19:10:02.655232   67035 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 19:10:02.769234   67035 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721934602.745925035
	
	I0725 19:10:02.769255   67035 fix.go:216] guest clock: 1721934602.745925035
	I0725 19:10:02.769264   67035 fix.go:229] Guest: 2024-07-25 19:10:02.745925035 +0000 UTC Remote: 2024-07-25 19:10:02.651121995 +0000 UTC m=+24.764874109 (delta=94.80304ms)
	I0725 19:10:02.769324   67035 fix.go:200] guest clock delta is within tolerance: 94.80304ms
	I0725 19:10:02.769332   67035 start.go:83] releasing machines lock for "kindnet-889508", held for 24.768905939s
	I0725 19:10:02.769366   67035 main.go:141] libmachine: (kindnet-889508) Calling .DriverName
	I0725 19:10:02.769644   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetIP
	I0725 19:10:02.772879   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.773488   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:02.773512   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.773766   67035 main.go:141] libmachine: (kindnet-889508) Calling .DriverName
	I0725 19:10:02.774267   67035 main.go:141] libmachine: (kindnet-889508) Calling .DriverName
	I0725 19:10:02.774464   67035 main.go:141] libmachine: (kindnet-889508) Calling .DriverName
	I0725 19:10:02.774565   67035 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 19:10:02.774617   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:02.774709   67035 ssh_runner.go:195] Run: cat /version.json
	I0725 19:10:02.774735   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:02.777727   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.778013   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.778164   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:02.778189   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.778365   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:02.778521   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:02.778524   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:02.778543   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:02.778683   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:02.778807   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:02.778868   67035 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/id_rsa Username:docker}
	I0725 19:10:02.778948   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:02.779062   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:02.779188   67035 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/id_rsa Username:docker}
	I0725 19:10:02.902994   67035 ssh_runner.go:195] Run: systemctl --version
	I0725 19:10:02.909234   67035 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 19:10:03.067134   67035 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 19:10:03.073220   67035 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 19:10:03.073296   67035 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 19:10:03.090095   67035 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 19:10:03.090117   67035 start.go:495] detecting cgroup driver to use...
	I0725 19:10:03.090193   67035 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 19:10:03.105393   67035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 19:10:03.118520   67035 docker.go:217] disabling cri-docker service (if available) ...
	I0725 19:10:03.118595   67035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 19:10:03.137790   67035 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 19:10:03.156167   67035 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 19:10:03.280122   67035 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 19:10:03.453699   67035 docker.go:233] disabling docker service ...
	I0725 19:10:03.453774   67035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 19:10:03.470032   67035 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 19:10:03.483551   67035 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 19:10:03.602486   67035 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 19:10:03.735802   67035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 19:10:03.752089   67035 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 19:10:03.772857   67035 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 19:10:03.772922   67035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:10:03.783491   67035 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 19:10:03.783560   67035 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:10:03.795011   67035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:10:03.807424   67035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:10:03.818651   67035 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 19:10:03.830307   67035 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:10:03.840632   67035 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:10:03.857498   67035 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:10:03.869977   67035 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 19:10:03.880239   67035 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 19:10:03.880306   67035 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 19:10:03.893975   67035 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 19:10:03.903985   67035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:10:04.020203   67035 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 19:10:04.160443   67035 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 19:10:04.160537   67035 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 19:10:04.165390   67035 start.go:563] Will wait 60s for crictl version
	I0725 19:10:04.165453   67035 ssh_runner.go:195] Run: which crictl
	I0725 19:10:04.169603   67035 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 19:10:04.217007   67035 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 19:10:04.217107   67035 ssh_runner.go:195] Run: crio --version
	I0725 19:10:04.249293   67035 ssh_runner.go:195] Run: crio --version
	I0725 19:10:04.279758   67035 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
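	The block above (process 67035, crio.go:59/70) shows the container-runtime prep step: CRI-O is reconfigured over SSH with a handful of sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager) and then restarted. A minimal Go sketch of that edit-then-restart pattern; the run callback is a hypothetical stand-in for minikube's ssh_runner and simply executes one shell command on the guest.

	package main

	import "fmt"

	// configureCRIO issues the same style of edits the log shows: point CRI-O at the
	// chosen pause image, force the cgroupfs cgroup manager, then restart the service.
	func configureCRIO(run func(cmd string) error, pauseImage string) error {
		cmds := []string{
			// swap the pause image (crio.go:59 above)
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
			// force the cgroupfs cgroup manager (crio.go:70 above)
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			// reload units and restart cri-o so the edits take effect
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart crio`,
		}
		for _, c := range cmds {
			if err := run(c); err != nil {
				return fmt.Errorf("running %q: %w", c, err)
			}
		}
		return nil
	}

	func main() {
		// dry-run stub: print each command instead of executing it over SSH
		_ = configureCRIO(func(cmd string) error {
			fmt.Println(cmd)
			return nil
		}, "registry.k8s.io/pause:3.9")
	}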
	I0725 19:10:02.618513   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:03.118553   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:03.619069   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:04.118599   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:04.618719   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:05.119133   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:05.619343   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:06.119274   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:06.619090   66554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:06.755041   66554 kubeadm.go:1113] duration metric: took 13.275804261s to wait for elevateKubeSystemPrivileges
	I0725 19:10:06.755075   66554 kubeadm.go:394] duration metric: took 23.299172809s to StartCluster
	I0725 19:10:06.755098   66554 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:10:06.755203   66554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 19:10:06.757050   66554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:10:06.757295   66554 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 19:10:06.757303   66554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 19:10:06.757400   66554 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 19:10:06.757496   66554 addons.go:69] Setting storage-provisioner=true in profile "auto-889508"
	I0725 19:10:06.757547   66554 addons.go:234] Setting addon storage-provisioner=true in "auto-889508"
	I0725 19:10:06.757597   66554 host.go:66] Checking if "auto-889508" exists ...
	I0725 19:10:06.757618   66554 addons.go:69] Setting default-storageclass=true in profile "auto-889508"
	I0725 19:10:06.757467   66554 config.go:182] Loaded profile config "auto-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:10:06.757655   66554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-889508"
	I0725 19:10:06.758031   66554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:10:06.758097   66554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:10:06.758038   66554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:10:06.758164   66554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:10:06.759142   66554 out.go:177] * Verifying Kubernetes components...
	I0725 19:10:06.760657   66554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:10:06.778909   66554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0725 19:10:06.779374   66554 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:10:06.779537   66554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38923
	I0725 19:10:06.779894   66554 main.go:141] libmachine: Using API Version  1
	I0725 19:10:06.779918   66554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:10:06.780009   66554 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:10:06.780426   66554 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:10:06.780546   66554 main.go:141] libmachine: Using API Version  1
	I0725 19:10:06.780567   66554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:10:06.780630   66554 main.go:141] libmachine: (auto-889508) Calling .GetState
	I0725 19:10:06.780896   66554 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:10:06.781447   66554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:10:06.781475   66554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:10:06.784978   66554 addons.go:234] Setting addon default-storageclass=true in "auto-889508"
	I0725 19:10:06.785020   66554 host.go:66] Checking if "auto-889508" exists ...
	I0725 19:10:06.785378   66554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:10:06.785406   66554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:10:06.801628   66554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35925
	I0725 19:10:06.802242   66554 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:10:06.802838   66554 main.go:141] libmachine: Using API Version  1
	I0725 19:10:06.802860   66554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:10:06.803213   66554 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:10:06.803411   66554 main.go:141] libmachine: (auto-889508) Calling .GetState
	I0725 19:10:06.804922   66554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0725 19:10:06.805559   66554 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:10:06.805559   66554 main.go:141] libmachine: (auto-889508) Calling .DriverName
	I0725 19:10:06.806058   66554 main.go:141] libmachine: Using API Version  1
	I0725 19:10:06.806081   66554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:10:06.806378   66554 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:10:06.806832   66554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:10:06.806871   66554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:10:06.807300   66554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 19:10:06.808649   66554 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:10:06.808667   66554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 19:10:06.808691   66554 main.go:141] libmachine: (auto-889508) Calling .GetSSHHostname
	I0725 19:10:06.812316   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:10:06.812818   66554 main.go:141] libmachine: (auto-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8a:40", ip: ""} in network mk-auto-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:09:26 +0000 UTC Type:0 Mac:52:54:00:b3:8a:40 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:auto-889508 Clientid:01:52:54:00:b3:8a:40}
	I0725 19:10:06.812833   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined IP address 192.168.39.77 and MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:10:06.813095   66554 main.go:141] libmachine: (auto-889508) Calling .GetSSHPort
	I0725 19:10:06.813301   66554 main.go:141] libmachine: (auto-889508) Calling .GetSSHKeyPath
	I0725 19:10:06.813475   66554 main.go:141] libmachine: (auto-889508) Calling .GetSSHUsername
	I0725 19:10:06.813739   66554 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508/id_rsa Username:docker}
	I0725 19:10:06.824284   66554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I0725 19:10:06.824791   66554 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:10:06.825319   66554 main.go:141] libmachine: Using API Version  1
	I0725 19:10:06.825342   66554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:10:06.825685   66554 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:10:06.825862   66554 main.go:141] libmachine: (auto-889508) Calling .GetState
	I0725 19:10:06.827425   66554 main.go:141] libmachine: (auto-889508) Calling .DriverName
	I0725 19:10:06.827634   66554 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 19:10:06.827658   66554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 19:10:06.827680   66554 main.go:141] libmachine: (auto-889508) Calling .GetSSHHostname
	I0725 19:10:06.830397   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:10:06.830869   66554 main.go:141] libmachine: (auto-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8a:40", ip: ""} in network mk-auto-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:09:26 +0000 UTC Type:0 Mac:52:54:00:b3:8a:40 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:auto-889508 Clientid:01:52:54:00:b3:8a:40}
	I0725 19:10:06.830892   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined IP address 192.168.39.77 and MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:10:06.831151   66554 main.go:141] libmachine: (auto-889508) Calling .GetSSHPort
	I0725 19:10:06.831376   66554 main.go:141] libmachine: (auto-889508) Calling .GetSSHKeyPath
	I0725 19:10:06.831527   66554 main.go:141] libmachine: (auto-889508) Calling .GetSSHUsername
	I0725 19:10:06.831683   66554 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508/id_rsa Username:docker}
	I0725 19:10:06.984310   66554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:10:06.984401   66554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 19:10:07.076820   66554 node_ready.go:35] waiting up to 15m0s for node "auto-889508" to be "Ready" ...
	I0725 19:10:07.087223   66554 node_ready.go:49] node "auto-889508" has status "Ready":"True"
	I0725 19:10:07.087251   66554 node_ready.go:38] duration metric: took 10.398652ms for node "auto-889508" to be "Ready" ...
	I0725 19:10:07.087262   66554 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:10:07.095853   66554 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-dtql6" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:07.129137   66554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:10:07.224658   66554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 19:10:04.281089   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetIP
	I0725 19:10:04.283784   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:04.284181   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:04.284235   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:04.284394   67035 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0725 19:10:04.288437   67035 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
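	The one-liner just above rewrites /etc/hosts non-destructively: strip any existing line tagged host.minikube.internal, append a fresh "IP<TAB>host" entry, stage the result under /tmp, and sudo-copy it back into place. A small Go sketch of assembling that same command string; in minikube the string would be handed to ssh_runner, which is not reproduced here.

	package main

	import "fmt"

	// hostsInjectCmd reproduces the shape of the captured command: drop any line
	// already ending in "<TAB><host>", append "<ip><TAB><host>", then copy the
	// staged file over /etc/hosts with sudo.
	func hostsInjectCmd(ip, host string) string {
		grepPart := fmt.Sprintf(`grep -v $'\t%s$' "/etc/hosts"`, host) // literal \t for bash $'...'
		echoPart := fmt.Sprintf("echo \"%s\t%s\"", ip, host)           // real tab between ip and host
		return fmt.Sprintf(`{ %s; %s; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`, grepPart, echoPart)
	}

	func main() {
		// values taken from the run above
		fmt.Println(hostsInjectCmd("192.168.72.1", "host.minikube.internal"))
	}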
	I0725 19:10:04.301689   67035 kubeadm.go:883] updating cluster {Name:kindnet-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:kindnet-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 19:10:04.301791   67035 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 19:10:04.301832   67035 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:10:04.334866   67035 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 19:10:04.334941   67035 ssh_runner.go:195] Run: which lz4
	I0725 19:10:04.339058   67035 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 19:10:04.342759   67035 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 19:10:04.342794   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 19:10:05.677231   67035 crio.go:462] duration metric: took 1.338209755s to copy over tarball
	I0725 19:10:05.677297   67035 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 19:10:07.786045   66554 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0725 19:10:08.295603   66554 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-889508" context rescaled to 1 replicas
	I0725 19:10:08.316499   66554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.091802041s)
	I0725 19:10:08.316536   66554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.187373669s)
	I0725 19:10:08.316550   66554 main.go:141] libmachine: Making call to close driver server
	I0725 19:10:08.316562   66554 main.go:141] libmachine: (auto-889508) Calling .Close
	I0725 19:10:08.316564   66554 main.go:141] libmachine: Making call to close driver server
	I0725 19:10:08.316573   66554 main.go:141] libmachine: (auto-889508) Calling .Close
	I0725 19:10:08.316986   66554 main.go:141] libmachine: (auto-889508) DBG | Closing plugin on server side
	I0725 19:10:08.317024   66554 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:10:08.317039   66554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:10:08.317048   66554 main.go:141] libmachine: Making call to close driver server
	I0725 19:10:08.317056   66554 main.go:141] libmachine: (auto-889508) Calling .Close
	I0725 19:10:08.317158   66554 main.go:141] libmachine: (auto-889508) DBG | Closing plugin on server side
	I0725 19:10:08.317224   66554 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:10:08.317240   66554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:10:08.317250   66554 main.go:141] libmachine: Making call to close driver server
	I0725 19:10:08.317259   66554 main.go:141] libmachine: (auto-889508) Calling .Close
	I0725 19:10:08.318409   66554 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:10:08.318432   66554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:10:08.318433   66554 main.go:141] libmachine: (auto-889508) DBG | Closing plugin on server side
	I0725 19:10:08.318434   66554 main.go:141] libmachine: (auto-889508) DBG | Closing plugin on server side
	I0725 19:10:08.318471   66554 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:10:08.318481   66554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:10:08.327613   66554 main.go:141] libmachine: Making call to close driver server
	I0725 19:10:08.327636   66554 main.go:141] libmachine: (auto-889508) Calling .Close
	I0725 19:10:08.327897   66554 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:10:08.327915   66554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:10:08.329818   66554 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0725 19:10:08.331067   66554 addons.go:510] duration metric: took 1.573665733s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0725 19:10:09.102357   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-dtql6" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:11.308638   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-dtql6" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:08.268674   67035 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.591339998s)
	I0725 19:10:08.268713   67035 crio.go:469] duration metric: took 2.591454639s to extract the tarball
	I0725 19:10:08.268723   67035 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 19:10:08.311998   67035 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:10:08.367459   67035 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 19:10:08.367487   67035 cache_images.go:84] Images are preloaded, skipping loading
	I0725 19:10:08.367497   67035 kubeadm.go:934] updating node { 192.168.72.127 8443 v1.30.3 crio true true} ...
	I0725 19:10:08.367643   67035 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-889508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:kindnet-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0725 19:10:08.367732   67035 ssh_runner.go:195] Run: crio config
	I0725 19:10:08.420601   67035 cni.go:84] Creating CNI manager for "kindnet"
	I0725 19:10:08.420622   67035 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 19:10:08.420671   67035 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.127 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-889508 NodeName:kindnet-889508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 19:10:08.420850   67035 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-889508"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.127
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.127"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 19:10:08.420924   67035 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 19:10:08.430222   67035 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 19:10:08.430284   67035 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 19:10:08.439224   67035 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0725 19:10:08.455321   67035 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 19:10:08.472371   67035 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0725 19:10:08.488614   67035 ssh_runner.go:195] Run: grep 192.168.72.127	control-plane.minikube.internal$ /etc/hosts
	I0725 19:10:08.492428   67035 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.127	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:10:08.505402   67035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:10:08.650240   67035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:10:08.669436   67035 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508 for IP: 192.168.72.127
	I0725 19:10:08.669462   67035 certs.go:194] generating shared ca certs ...
	I0725 19:10:08.669482   67035 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:10:08.669656   67035 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 19:10:08.669733   67035 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 19:10:08.669754   67035 certs.go:256] generating profile certs ...
	I0725 19:10:08.669834   67035 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/client.key
	I0725 19:10:08.669853   67035 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/client.crt with IP's: []
	I0725 19:10:08.933722   67035 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/client.crt ...
	I0725 19:10:08.933752   67035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/client.crt: {Name:mk293ad1867ca70df27f04fcd1ba116e591b2847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:10:08.933899   67035 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/client.key ...
	I0725 19:10:08.933909   67035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/client.key: {Name:mkdad75f4618a04e1767fff3c80fea7af3d41b8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:10:08.933986   67035 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.key.83db3d86
	I0725 19:10:08.934000   67035 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.crt.83db3d86 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.127]
	I0725 19:10:09.087403   67035 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.crt.83db3d86 ...
	I0725 19:10:09.087435   67035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.crt.83db3d86: {Name:mkf17526c8809fbb980a1718253bfa0a67a3ab39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:10:09.087622   67035 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.key.83db3d86 ...
	I0725 19:10:09.087641   67035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.key.83db3d86: {Name:mk48e4b5bc126d5deeafc8995e3b4b863bfcf765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:10:09.087740   67035 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.crt.83db3d86 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.crt
	I0725 19:10:09.087855   67035 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.key.83db3d86 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.key
	I0725 19:10:09.087935   67035 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/proxy-client.key
	I0725 19:10:09.087955   67035 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/proxy-client.crt with IP's: []
	I0725 19:10:09.353248   67035 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/proxy-client.crt ...
	I0725 19:10:09.353284   67035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/proxy-client.crt: {Name:mkdef795d18d083de437f457854687db6dcc7e13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:10:09.353457   67035 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/proxy-client.key ...
	I0725 19:10:09.353468   67035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/proxy-client.key: {Name:mk6faf42cd79114c9843bfd7ac00463608c54700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:10:09.353628   67035 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 19:10:09.353661   67035 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 19:10:09.353670   67035 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 19:10:09.353690   67035 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 19:10:09.353713   67035 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 19:10:09.353733   67035 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 19:10:09.353768   67035 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 19:10:09.354370   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 19:10:09.389219   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 19:10:09.425153   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 19:10:09.452851   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 19:10:09.477767   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0725 19:10:09.501971   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 19:10:09.526014   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 19:10:09.552263   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/kindnet-889508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 19:10:09.576802   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 19:10:09.601061   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 19:10:09.625702   67035 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 19:10:09.649706   67035 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 19:10:09.665768   67035 ssh_runner.go:195] Run: openssl version
	I0725 19:10:09.671301   67035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 19:10:09.681959   67035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 19:10:09.686333   67035 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 19:10:09.686389   67035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 19:10:09.692482   67035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 19:10:09.702969   67035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 19:10:09.713852   67035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 19:10:09.718473   67035 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 19:10:09.718536   67035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 19:10:09.724161   67035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 19:10:09.734662   67035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 19:10:09.745172   67035 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:10:09.749407   67035 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:10:09.749462   67035 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:10:09.756465   67035 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
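	The certificate lines above follow the OpenSSL trust-store convention: each PEM is copied into /usr/share/ca-certificates, its subject hash is computed with "openssl x509 -hash -noout", and /etc/ssl/certs/<hash>.0 is symlinked at it (hence names like 51391683.0 and b5213941.0). A sketch of that last step in Go, shelling out to the same openssl invocation; the path in main is illustrative and the program needs root to write under /etc/ssl/certs.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkTrustedCert asks openssl for the certificate's subject hash and creates
	// the /etc/ssl/certs/<hash>.0 symlink that OpenSSL-based clients consult.
	func linkTrustedCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// drop a stale link first so the symlink call cannot fail with EEXIST
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkTrustedCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}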
	I0725 19:10:09.766814   67035 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 19:10:09.771074   67035 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 19:10:09.771141   67035 kubeadm.go:392] StartCluster: {Name:kindnet-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:kindnet-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:10:09.771207   67035 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 19:10:09.771250   67035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 19:10:09.809316   67035 cri.go:89] found id: ""
	I0725 19:10:09.809394   67035 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 19:10:09.819687   67035 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 19:10:09.829487   67035 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 19:10:09.839077   67035 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 19:10:09.839095   67035 kubeadm.go:157] found existing configuration files:
	
	I0725 19:10:09.839135   67035 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 19:10:09.848426   67035 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 19:10:09.848488   67035 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 19:10:09.859658   67035 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 19:10:09.871675   67035 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 19:10:09.871742   67035 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 19:10:09.883047   67035 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 19:10:09.893056   67035 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 19:10:09.893133   67035 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 19:10:09.905179   67035 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 19:10:09.914907   67035 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 19:10:09.914985   67035 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
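	(The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; anything missing or pointing elsewhere is deleted so that the following kubeadm init can regenerate it. A rough local sketch of that check, with the endpoint and file list hard-coded for illustration — not the actual kubeadm.go implementation, which runs these commands over SSH:)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range confs {
			data, err := os.ReadFile(path)
			// Missing file or wrong API endpoint: remove it so kubeadm recreates it.
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(path)
				fmt.Printf("removed stale config %s\n", path)
			}
		}
	}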
	I0725 19:10:09.924846   67035 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 19:10:10.126675   67035 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 19:10:13.783071   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-dtql6" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:16.101991   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-dtql6" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:21.173122   67035 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 19:10:21.173173   67035 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 19:10:21.173237   67035 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 19:10:21.173393   67035 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 19:10:21.173512   67035 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 19:10:21.173622   67035 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 19:10:21.175158   67035 out.go:204]   - Generating certificates and keys ...
	I0725 19:10:21.175273   67035 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 19:10:21.175401   67035 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 19:10:21.175503   67035 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 19:10:21.175612   67035 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 19:10:21.175723   67035 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 19:10:21.175834   67035 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 19:10:21.175960   67035 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 19:10:21.176192   67035 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-889508 localhost] and IPs [192.168.72.127 127.0.0.1 ::1]
	I0725 19:10:21.176288   67035 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 19:10:21.176480   67035 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-889508 localhost] and IPs [192.168.72.127 127.0.0.1 ::1]
	I0725 19:10:21.176568   67035 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 19:10:21.176673   67035 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 19:10:21.176728   67035 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 19:10:21.176790   67035 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 19:10:21.176852   67035 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 19:10:21.176913   67035 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 19:10:21.176984   67035 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 19:10:21.177069   67035 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 19:10:21.177138   67035 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 19:10:21.177246   67035 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 19:10:21.177318   67035 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 19:10:21.178694   67035 out.go:204]   - Booting up control plane ...
	I0725 19:10:21.178767   67035 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 19:10:21.178832   67035 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 19:10:21.178915   67035 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 19:10:21.179025   67035 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 19:10:21.179144   67035 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 19:10:21.179207   67035 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 19:10:21.179315   67035 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 19:10:21.179423   67035 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 19:10:21.179494   67035 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 510.909973ms
	I0725 19:10:21.179563   67035 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 19:10:21.179627   67035 kubeadm.go:310] [api-check] The API server is healthy after 6.002785783s
	I0725 19:10:21.179721   67035 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 19:10:21.179859   67035 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 19:10:21.179906   67035 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 19:10:21.180110   67035 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-889508 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 19:10:21.180163   67035 kubeadm.go:310] [bootstrap-token] Using token: 06znew.uuir5y8yqm03olnm
	I0725 19:10:21.181314   67035 out.go:204]   - Configuring RBAC rules ...
	I0725 19:10:21.181400   67035 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 19:10:21.181469   67035 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 19:10:21.181581   67035 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 19:10:21.181696   67035 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 19:10:21.181807   67035 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 19:10:21.181892   67035 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 19:10:21.181985   67035 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 19:10:21.182033   67035 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 19:10:21.182079   67035 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 19:10:21.182085   67035 kubeadm.go:310] 
	I0725 19:10:21.182129   67035 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 19:10:21.182136   67035 kubeadm.go:310] 
	I0725 19:10:21.182199   67035 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 19:10:21.182206   67035 kubeadm.go:310] 
	I0725 19:10:21.182270   67035 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 19:10:21.182339   67035 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 19:10:21.182384   67035 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 19:10:21.182391   67035 kubeadm.go:310] 
	I0725 19:10:21.182445   67035 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 19:10:21.182452   67035 kubeadm.go:310] 
	I0725 19:10:21.182512   67035 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 19:10:21.182521   67035 kubeadm.go:310] 
	I0725 19:10:21.182596   67035 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 19:10:21.182697   67035 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 19:10:21.182763   67035 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 19:10:21.182769   67035 kubeadm.go:310] 
	I0725 19:10:21.182848   67035 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 19:10:21.182934   67035 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 19:10:21.182943   67035 kubeadm.go:310] 
	I0725 19:10:21.183047   67035 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 06znew.uuir5y8yqm03olnm \
	I0725 19:10:21.183190   67035 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 \
	I0725 19:10:21.183222   67035 kubeadm.go:310] 	--control-plane 
	I0725 19:10:21.183235   67035 kubeadm.go:310] 
	I0725 19:10:21.183355   67035 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 19:10:21.183364   67035 kubeadm.go:310] 
	I0725 19:10:21.183479   67035 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 06znew.uuir5y8yqm03olnm \
	I0725 19:10:21.183622   67035 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 
	I0725 19:10:21.183641   67035 cni.go:84] Creating CNI manager for "kindnet"
	I0725 19:10:21.184984   67035 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0725 19:10:18.102726   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-dtql6" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:18.602141   66554 pod_ready.go:97] pod "coredns-7db6d8ff4d-dtql6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 19:10:18 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 19:10:06 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 19:10:06 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 19:10:06 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 19:10:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.77 HostIPs:[{IP:192.168.39.
77}] PodIP: PodIPs:[] StartTime:2024-07-25 19:10:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-25 19:10:08 +0000 UTC,FinishedAt:2024-07-25 19:10:18 +0000 UTC,ContainerID:cri-o://5e207fa06d9f2c216821d2baf9a51998a2e067547d2f906f2c9e1f92638d6ba3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://5e207fa06d9f2c216821d2baf9a51998a2e067547d2f906f2c9e1f92638d6ba3 Started:0xc001f05be0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0725 19:10:18.602176   66554 pod_ready.go:81] duration metric: took 11.506288517s for pod "coredns-7db6d8ff4d-dtql6" in "kube-system" namespace to be "Ready" ...
	E0725 19:10:18.602192   66554 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-dtql6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 19:10:18 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 19:10:06 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 19:10:06 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 19:10:06 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 19:10:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.3
9.77 HostIPs:[{IP:192.168.39.77}] PodIP: PodIPs:[] StartTime:2024-07-25 19:10:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-25 19:10:08 +0000 UTC,FinishedAt:2024-07-25 19:10:18 +0000 UTC,ContainerID:cri-o://5e207fa06d9f2c216821d2baf9a51998a2e067547d2f906f2c9e1f92638d6ba3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://5e207fa06d9f2c216821d2baf9a51998a2e067547d2f906f2c9e1f92638d6ba3 Started:0xc001f05be0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0725 19:10:18.602201   66554 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:20.609374   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:21.186125   67035 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0725 19:10:21.191564   67035 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0725 19:10:21.191581   67035 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0725 19:10:21.209282   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0725 19:10:21.523668   67035 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 19:10:21.523773   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:21.523816   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-889508 minikube.k8s.io/updated_at=2024_07_25T19_10_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=kindnet-889508 minikube.k8s.io/primary=true
	I0725 19:10:21.567536   67035 ops.go:34] apiserver oom_adj: -16
	I0725 19:10:21.730412   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:22.231078   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:22.731078   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:23.107420   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:25.108598   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:27.109513   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:23.230457   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:23.731262   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:24.231199   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:24.731383   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:25.231533   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:25.731206   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:26.230940   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:26.730731   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:27.230697   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:27.730572   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:29.608882   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:32.108542   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:28.230906   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:28.730900   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:29.230641   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:29.730525   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:30.231469   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:30.730868   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:31.231009   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:31.730567   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:32.231545   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:32.730890   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:33.231156   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:33.730995   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:34.230701   67035 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:10:34.318758   67035 kubeadm.go:1113] duration metric: took 12.795043569s to wait for elevateKubeSystemPrivileges
	I0725 19:10:34.318794   67035 kubeadm.go:394] duration metric: took 24.54765885s to StartCluster
	I0725 19:10:34.318810   67035 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:10:34.318892   67035 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 19:10:34.320661   67035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:10:34.320951   67035 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 19:10:34.320984   67035 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 19:10:34.321050   67035 addons.go:69] Setting storage-provisioner=true in profile "kindnet-889508"
	I0725 19:10:34.321062   67035 addons.go:69] Setting default-storageclass=true in profile "kindnet-889508"
	I0725 19:10:34.321084   67035 addons.go:234] Setting addon storage-provisioner=true in "kindnet-889508"
	I0725 19:10:34.321084   67035 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-889508"
	I0725 19:10:34.321129   67035 config.go:182] Loaded profile config "kindnet-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:10:34.320963   67035 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 19:10:34.321131   67035 host.go:66] Checking if "kindnet-889508" exists ...
	I0725 19:10:34.321523   67035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:10:34.321560   67035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:10:34.321572   67035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:10:34.321581   67035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:10:34.322543   67035 out.go:177] * Verifying Kubernetes components...
	I0725 19:10:34.323616   67035 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:10:34.337416   67035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I0725 19:10:34.337825   67035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0725 19:10:34.337881   67035 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:10:34.338230   67035 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:10:34.338430   67035 main.go:141] libmachine: Using API Version  1
	I0725 19:10:34.338458   67035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:10:34.338726   67035 main.go:141] libmachine: Using API Version  1
	I0725 19:10:34.338751   67035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:10:34.338860   67035 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:10:34.339156   67035 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:10:34.339312   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetState
	I0725 19:10:34.339463   67035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:10:34.339506   67035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:10:34.343526   67035 addons.go:234] Setting addon default-storageclass=true in "kindnet-889508"
	I0725 19:10:34.343568   67035 host.go:66] Checking if "kindnet-889508" exists ...
	I0725 19:10:34.343911   67035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:10:34.343956   67035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:10:34.355500   67035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46469
	I0725 19:10:34.355949   67035 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:10:34.356510   67035 main.go:141] libmachine: Using API Version  1
	I0725 19:10:34.356540   67035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:10:34.356923   67035 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:10:34.357115   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetState
	I0725 19:10:34.359136   67035 main.go:141] libmachine: (kindnet-889508) Calling .DriverName
	I0725 19:10:34.360224   67035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35825
	I0725 19:10:34.360855   67035 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:10:34.361342   67035 main.go:141] libmachine: Using API Version  1
	I0725 19:10:34.361363   67035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:10:34.361513   67035 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 19:10:34.361761   67035 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:10:34.362368   67035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:10:34.362421   67035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:10:34.362817   67035 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:10:34.362833   67035 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 19:10:34.362847   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:34.366066   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:34.366562   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:34.366588   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:34.366888   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:34.367082   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:34.367289   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:34.367453   67035 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/id_rsa Username:docker}
	I0725 19:10:34.380809   67035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I0725 19:10:34.381369   67035 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:10:34.381861   67035 main.go:141] libmachine: Using API Version  1
	I0725 19:10:34.381886   67035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:10:34.382176   67035 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:10:34.382366   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetState
	I0725 19:10:34.384189   67035 main.go:141] libmachine: (kindnet-889508) Calling .DriverName
	I0725 19:10:34.384435   67035 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 19:10:34.384467   67035 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 19:10:34.384486   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHHostname
	I0725 19:10:34.387319   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:34.387657   67035 main.go:141] libmachine: (kindnet-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:c7:53", ip: ""} in network mk-kindnet-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:09:52 +0000 UTC Type:0 Mac:52:54:00:03:c7:53 Iaid: IPaddr:192.168.72.127 Prefix:24 Hostname:kindnet-889508 Clientid:01:52:54:00:03:c7:53}
	I0725 19:10:34.387677   67035 main.go:141] libmachine: (kindnet-889508) DBG | domain kindnet-889508 has defined IP address 192.168.72.127 and MAC address 52:54:00:03:c7:53 in network mk-kindnet-889508
	I0725 19:10:34.387886   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHPort
	I0725 19:10:34.388031   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHKeyPath
	I0725 19:10:34.388147   67035 main.go:141] libmachine: (kindnet-889508) Calling .GetSSHUsername
	I0725 19:10:34.388236   67035 sshutil.go:53] new ssh client: &{IP:192.168.72.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/kindnet-889508/id_rsa Username:docker}
	I0725 19:10:34.466624   67035 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 19:10:34.527109   67035 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:10:34.665392   67035 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:10:34.679514   67035 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 19:10:34.758601   67035 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
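	(The injected record comes from the long sed pipeline logged at 19:10:34.466624: it inserts a hosts block immediately before the forward directive, and a log directive before errors, in the coredns ConfigMap. Reconstructed from those sed expressions rather than captured from the cluster, the relevant part of the rewritten Corefile looks roughly like:)

	    log
	    errors
	    ...
	    hosts {
	       192.168.72.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf

	(This is what lets pods resolve host.minikube.internal to the KVM host, 192.168.72.1, while every other name still falls through to the normal forwarder.)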
	I0725 19:10:34.760378   67035 node_ready.go:35] waiting up to 15m0s for node "kindnet-889508" to be "Ready" ...
	I0725 19:10:35.110522   67035 main.go:141] libmachine: Making call to close driver server
	I0725 19:10:35.110559   67035 main.go:141] libmachine: (kindnet-889508) Calling .Close
	I0725 19:10:35.110597   67035 main.go:141] libmachine: Making call to close driver server
	I0725 19:10:35.110621   67035 main.go:141] libmachine: (kindnet-889508) Calling .Close
	I0725 19:10:35.110910   67035 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:10:35.110956   67035 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:10:35.110986   67035 main.go:141] libmachine: Making call to close driver server
	I0725 19:10:35.110997   67035 main.go:141] libmachine: (kindnet-889508) Calling .Close
	I0725 19:10:35.111085   67035 main.go:141] libmachine: (kindnet-889508) DBG | Closing plugin on server side
	I0725 19:10:35.111299   67035 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:10:35.111290   67035 main.go:141] libmachine: (kindnet-889508) DBG | Closing plugin on server side
	I0725 19:10:35.111317   67035 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:10:35.111395   67035 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:10:35.111414   67035 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:10:35.111430   67035 main.go:141] libmachine: Making call to close driver server
	I0725 19:10:35.111441   67035 main.go:141] libmachine: (kindnet-889508) Calling .Close
	I0725 19:10:35.111706   67035 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:10:35.111722   67035 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:10:35.120905   67035 main.go:141] libmachine: Making call to close driver server
	I0725 19:10:35.120931   67035 main.go:141] libmachine: (kindnet-889508) Calling .Close
	I0725 19:10:35.121194   67035 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:10:35.121213   67035 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:10:35.122812   67035 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0725 19:10:34.109843   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:36.110003   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:35.124076   67035 addons.go:510] duration metric: took 803.089653ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0725 19:10:35.263455   67035 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-889508" context rescaled to 1 replicas
	I0725 19:10:36.763712   67035 node_ready.go:53] node "kindnet-889508" has status "Ready":"False"
	I0725 19:10:38.609164   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:41.109179   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:38.766146   67035 node_ready.go:53] node "kindnet-889508" has status "Ready":"False"
	I0725 19:10:41.263216   67035 node_ready.go:53] node "kindnet-889508" has status "Ready":"False"
	I0725 19:10:43.608442   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:45.609416   66554 pod_ready.go:102] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"False"
	I0725 19:10:47.109276   66554 pod_ready.go:92] pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:47.109304   66554 pod_ready.go:81] duration metric: took 28.507090276s for pod "coredns-7db6d8ff4d-ghtst" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:47.109314   66554 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:47.114128   66554 pod_ready.go:92] pod "etcd-auto-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:47.114149   66554 pod_ready.go:81] duration metric: took 4.825995ms for pod "etcd-auto-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:47.114157   66554 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:47.118687   66554 pod_ready.go:92] pod "kube-apiserver-auto-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:47.118709   66554 pod_ready.go:81] duration metric: took 4.545801ms for pod "kube-apiserver-auto-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:47.118721   66554 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:47.123999   66554 pod_ready.go:92] pod "kube-controller-manager-auto-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:47.124022   66554 pod_ready.go:81] duration metric: took 5.293184ms for pod "kube-controller-manager-auto-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:47.124034   66554 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-tjqvj" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:47.129909   66554 pod_ready.go:92] pod "kube-proxy-tjqvj" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:47.129928   66554 pod_ready.go:81] duration metric: took 5.888124ms for pod "kube-proxy-tjqvj" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:47.129937   66554 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:43.263871   67035 node_ready.go:53] node "kindnet-889508" has status "Ready":"False"
	I0725 19:10:45.264812   67035 node_ready.go:53] node "kindnet-889508" has status "Ready":"False"
	I0725 19:10:47.764746   67035 node_ready.go:53] node "kindnet-889508" has status "Ready":"False"
	I0725 19:10:47.507158   66554 pod_ready.go:92] pod "kube-scheduler-auto-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:47.507186   66554 pod_ready.go:81] duration metric: took 377.241783ms for pod "kube-scheduler-auto-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:47.507196   66554 pod_ready.go:38] duration metric: took 40.419922161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:10:47.507214   66554 api_server.go:52] waiting for apiserver process to appear ...
	I0725 19:10:47.507271   66554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 19:10:47.524073   66554 api_server.go:72] duration metric: took 40.766741916s to wait for apiserver process to appear ...
	I0725 19:10:47.524095   66554 api_server.go:88] waiting for apiserver healthz status ...
	I0725 19:10:47.524110   66554 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I0725 19:10:47.528978   66554 api_server.go:279] https://192.168.39.77:8443/healthz returned 200:
	ok
	I0725 19:10:47.530125   66554 api_server.go:141] control plane version: v1.30.3
	I0725 19:10:47.530148   66554 api_server.go:131] duration metric: took 6.047849ms to wait for apiserver health ...
	I0725 19:10:47.530157   66554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 19:10:47.709044   66554 system_pods.go:59] 7 kube-system pods found
	I0725 19:10:47.709075   66554 system_pods.go:61] "coredns-7db6d8ff4d-ghtst" [45263d39-086d-498a-8664-bbd5d1f05866] Running
	I0725 19:10:47.709080   66554 system_pods.go:61] "etcd-auto-889508" [bda0f045-b78f-4f27-9eda-a695630f4932] Running
	I0725 19:10:47.709084   66554 system_pods.go:61] "kube-apiserver-auto-889508" [f3143afd-a9bf-4593-bd8a-becd629093c6] Running
	I0725 19:10:47.709087   66554 system_pods.go:61] "kube-controller-manager-auto-889508" [6782f51a-8db5-497f-bb00-0f23e12bebd4] Running
	I0725 19:10:47.709090   66554 system_pods.go:61] "kube-proxy-tjqvj" [01858b42-4b67-4cd4-8bd1-08ef77c76951] Running
	I0725 19:10:47.709098   66554 system_pods.go:61] "kube-scheduler-auto-889508" [1bf07075-d1c0-453e-bfe7-174949077ffd] Running
	I0725 19:10:47.709101   66554 system_pods.go:61] "storage-provisioner" [365705a2-ad09-4861-9462-e206a939e92b] Running
	I0725 19:10:47.709106   66554 system_pods.go:74] duration metric: took 178.94449ms to wait for pod list to return data ...
	I0725 19:10:47.709113   66554 default_sa.go:34] waiting for default service account to be created ...
	I0725 19:10:47.906092   66554 default_sa.go:45] found service account: "default"
	I0725 19:10:47.906117   66554 default_sa.go:55] duration metric: took 196.998754ms for default service account to be created ...
	I0725 19:10:47.906127   66554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 19:10:48.108866   66554 system_pods.go:86] 7 kube-system pods found
	I0725 19:10:48.108893   66554 system_pods.go:89] "coredns-7db6d8ff4d-ghtst" [45263d39-086d-498a-8664-bbd5d1f05866] Running
	I0725 19:10:48.108898   66554 system_pods.go:89] "etcd-auto-889508" [bda0f045-b78f-4f27-9eda-a695630f4932] Running
	I0725 19:10:48.108908   66554 system_pods.go:89] "kube-apiserver-auto-889508" [f3143afd-a9bf-4593-bd8a-becd629093c6] Running
	I0725 19:10:48.108913   66554 system_pods.go:89] "kube-controller-manager-auto-889508" [6782f51a-8db5-497f-bb00-0f23e12bebd4] Running
	I0725 19:10:48.108917   66554 system_pods.go:89] "kube-proxy-tjqvj" [01858b42-4b67-4cd4-8bd1-08ef77c76951] Running
	I0725 19:10:48.108921   66554 system_pods.go:89] "kube-scheduler-auto-889508" [1bf07075-d1c0-453e-bfe7-174949077ffd] Running
	I0725 19:10:48.108924   66554 system_pods.go:89] "storage-provisioner" [365705a2-ad09-4861-9462-e206a939e92b] Running
	I0725 19:10:48.108929   66554 system_pods.go:126] duration metric: took 202.797974ms to wait for k8s-apps to be running ...
	I0725 19:10:48.108936   66554 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 19:10:48.108980   66554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 19:10:48.123106   66554 system_svc.go:56] duration metric: took 14.159244ms WaitForService to wait for kubelet
	I0725 19:10:48.123138   66554 kubeadm.go:582] duration metric: took 41.365809654s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 19:10:48.123161   66554 node_conditions.go:102] verifying NodePressure condition ...
	I0725 19:10:48.307469   66554 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 19:10:48.307509   66554 node_conditions.go:123] node cpu capacity is 2
	I0725 19:10:48.307521   66554 node_conditions.go:105] duration metric: took 184.355671ms to run NodePressure ...
	I0725 19:10:48.307536   66554 start.go:241] waiting for startup goroutines ...
	I0725 19:10:48.307546   66554 start.go:246] waiting for cluster config update ...
	I0725 19:10:48.307571   66554 start.go:255] writing updated cluster config ...
	I0725 19:10:48.308017   66554 ssh_runner.go:195] Run: rm -f paused
	I0725 19:10:48.358608   66554 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 19:10:48.360568   66554 out.go:177] * Done! kubectl is now configured to use "auto-889508" cluster and "default" namespace by default
	I0725 19:10:49.764953   67035 node_ready.go:53] node "kindnet-889508" has status "Ready":"False"
	I0725 19:10:50.766270   67035 node_ready.go:49] node "kindnet-889508" has status "Ready":"True"
	I0725 19:10:50.766298   67035 node_ready.go:38] duration metric: took 16.005893445s for node "kindnet-889508" to be "Ready" ...
	I0725 19:10:50.766310   67035 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:10:50.776704   67035 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-zz8t7" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:51.783589   67035 pod_ready.go:92] pod "coredns-7db6d8ff4d-zz8t7" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:51.783616   67035 pod_ready.go:81] duration metric: took 1.006880713s for pod "coredns-7db6d8ff4d-zz8t7" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:51.783629   67035 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:51.788148   67035 pod_ready.go:92] pod "etcd-kindnet-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:51.788175   67035 pod_ready.go:81] duration metric: took 4.538085ms for pod "etcd-kindnet-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:51.788192   67035 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:51.792442   67035 pod_ready.go:92] pod "kube-apiserver-kindnet-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:51.792464   67035 pod_ready.go:81] duration metric: took 4.262867ms for pod "kube-apiserver-kindnet-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:51.792475   67035 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:51.797228   67035 pod_ready.go:92] pod "kube-controller-manager-kindnet-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:51.797241   67035 pod_ready.go:81] duration metric: took 4.758267ms for pod "kube-controller-manager-kindnet-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:51.797249   67035 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-tmsvj" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:51.964786   67035 pod_ready.go:92] pod "kube-proxy-tmsvj" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:51.964809   67035 pod_ready.go:81] duration metric: took 167.553443ms for pod "kube-proxy-tmsvj" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:51.964818   67035 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:52.364241   67035 pod_ready.go:92] pod "kube-scheduler-kindnet-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:10:52.364264   67035 pod_ready.go:81] duration metric: took 399.439611ms for pod "kube-scheduler-kindnet-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:10:52.364275   67035 pod_ready.go:38] duration metric: took 1.597948966s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
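	(The pod_ready polling that dominates this log, for both auto-889508 and kindnet-889508, amounts to listing the relevant kube-system pods and re-checking their Ready condition until it reports True or the 15m budget runs out. A condensed client-go sketch of that loop, assuming KUBECONFIG points at a reachable cluster — an approximation for illustration, not minikube's pod_ready.go:)

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether a pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				panic(err)
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				if !isReady(&pods.Items[i]) {
					ready = false
				}
			}
			if ready {
				fmt.Println("all kube-dns pods are Ready")
				return
			}
			time.Sleep(2 * time.Second) // the log above polls on a similar cadence
		}
	}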
	I0725 19:10:52.364287   67035 api_server.go:52] waiting for apiserver process to appear ...
	I0725 19:10:52.364356   67035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 19:10:52.382145   67035 api_server.go:72] duration metric: took 18.061155447s to wait for apiserver process to appear ...
	I0725 19:10:52.382180   67035 api_server.go:88] waiting for apiserver healthz status ...
	I0725 19:10:52.382195   67035 api_server.go:253] Checking apiserver healthz at https://192.168.72.127:8443/healthz ...
	I0725 19:10:52.387394   67035 api_server.go:279] https://192.168.72.127:8443/healthz returned 200:
	ok
	I0725 19:10:52.388299   67035 api_server.go:141] control plane version: v1.30.3
	I0725 19:10:52.388333   67035 api_server.go:131] duration metric: took 6.135458ms to wait for apiserver health ...
	I0725 19:10:52.388343   67035 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 19:10:52.567939   67035 system_pods.go:59] 8 kube-system pods found
	I0725 19:10:52.567975   67035 system_pods.go:61] "coredns-7db6d8ff4d-zz8t7" [7392e94d-30a3-4088-bbb4-181de380ee63] Running
	I0725 19:10:52.567981   67035 system_pods.go:61] "etcd-kindnet-889508" [6d993353-8509-455f-992d-7f0eb11d3628] Running
	I0725 19:10:52.567985   67035 system_pods.go:61] "kindnet-8xhc9" [34a5eade-1f3b-488c-9dd2-13ae10c9622c] Running
	I0725 19:10:52.567992   67035 system_pods.go:61] "kube-apiserver-kindnet-889508" [26085730-7a7d-45e8-b270-84135cc2a701] Running
	I0725 19:10:52.567997   67035 system_pods.go:61] "kube-controller-manager-kindnet-889508" [0d624506-1676-4b2e-b65d-cc1a77130703] Running
	I0725 19:10:52.568001   67035 system_pods.go:61] "kube-proxy-tmsvj" [6e2ce78b-e1d2-4dd9-8e4a-5c3df893d5f6] Running
	I0725 19:10:52.568008   67035 system_pods.go:61] "kube-scheduler-kindnet-889508" [fc13610b-1556-44fc-8425-f0fe4e521526] Running
	I0725 19:10:52.568013   67035 system_pods.go:61] "storage-provisioner" [3ebfc1f8-5537-4382-b7ae-6f94691e2873] Running
	I0725 19:10:52.568024   67035 system_pods.go:74] duration metric: took 179.674651ms to wait for pod list to return data ...
	I0725 19:10:52.568036   67035 default_sa.go:34] waiting for default service account to be created ...
	I0725 19:10:52.764157   67035 default_sa.go:45] found service account: "default"
	I0725 19:10:52.764213   67035 default_sa.go:55] duration metric: took 196.136879ms for default service account to be created ...
	I0725 19:10:52.764236   67035 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 19:10:52.968004   67035 system_pods.go:86] 8 kube-system pods found
	I0725 19:10:52.968032   67035 system_pods.go:89] "coredns-7db6d8ff4d-zz8t7" [7392e94d-30a3-4088-bbb4-181de380ee63] Running
	I0725 19:10:52.968037   67035 system_pods.go:89] "etcd-kindnet-889508" [6d993353-8509-455f-992d-7f0eb11d3628] Running
	I0725 19:10:52.968041   67035 system_pods.go:89] "kindnet-8xhc9" [34a5eade-1f3b-488c-9dd2-13ae10c9622c] Running
	I0725 19:10:52.968045   67035 system_pods.go:89] "kube-apiserver-kindnet-889508" [26085730-7a7d-45e8-b270-84135cc2a701] Running
	I0725 19:10:52.968050   67035 system_pods.go:89] "kube-controller-manager-kindnet-889508" [0d624506-1676-4b2e-b65d-cc1a77130703] Running
	I0725 19:10:52.968054   67035 system_pods.go:89] "kube-proxy-tmsvj" [6e2ce78b-e1d2-4dd9-8e4a-5c3df893d5f6] Running
	I0725 19:10:52.968058   67035 system_pods.go:89] "kube-scheduler-kindnet-889508" [fc13610b-1556-44fc-8425-f0fe4e521526] Running
	I0725 19:10:52.968062   67035 system_pods.go:89] "storage-provisioner" [3ebfc1f8-5537-4382-b7ae-6f94691e2873] Running
	I0725 19:10:52.968068   67035 system_pods.go:126] duration metric: took 203.827452ms to wait for k8s-apps to be running ...
	I0725 19:10:52.968076   67035 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 19:10:52.968116   67035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 19:10:52.982651   67035 system_svc.go:56] duration metric: took 14.566123ms WaitForService to wait for kubelet
	I0725 19:10:52.982686   67035 kubeadm.go:582] duration metric: took 18.661700533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 19:10:52.982711   67035 node_conditions.go:102] verifying NodePressure condition ...
	I0725 19:10:53.164643   67035 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 19:10:53.164672   67035 node_conditions.go:123] node cpu capacity is 2
	I0725 19:10:53.164683   67035 node_conditions.go:105] duration metric: took 181.96693ms to run NodePressure ...
	I0725 19:10:53.164693   67035 start.go:241] waiting for startup goroutines ...
	I0725 19:10:53.164700   67035 start.go:246] waiting for cluster config update ...
	I0725 19:10:53.164708   67035 start.go:255] writing updated cluster config ...
	I0725 19:10:53.164949   67035 ssh_runner.go:195] Run: rm -f paused
	I0725 19:10:53.213848   67035 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 19:10:53.216039   67035 out.go:177] * Done! kubectl is now configured to use "kindnet-889508" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.405596232Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934667405563499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed7396e6-f79f-48c9-9d75-7c64d162b1c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.406332216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07c7236e-4a43-4324-b00a-cee1cfc23a26 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.406403943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07c7236e-4a43-4324-b00a-cee1cfc23a26 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.406974666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933420553659976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab6e673af88268ee06fe9c2b3b7ac098cc58bbcb832927545a17193d7e41636,PodSandboxId:661084db9f623ffc4b6c1e8fa5ce376bc15144c73fdec444b14a710909a5e356,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933400245604126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 700149fa-1af8-429d-b3c3-f47b06c7e4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 2265fe6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f,PodSandboxId:9870d6544db5a4fb22a11f1a993445546e33defe7f72c133bb4a4a4628b9e70d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933397392295858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mfjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 452e9a58-6c09-4b38-8a0d-40f7b2b013d6,},Annotations:map[string]string{io.kubernetes.container.hash: 40083d68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b,PodSandboxId:c1147915f291074ecedadb6c7ffc9ee5cab7ab76b38111e55e68c88f63d5717c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933389732996259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-smhmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6cc9bb4-5
72b-4d0e-92a6-47fc71501ade,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5e86bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933389728932129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c
-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1,PodSandboxId:e6265e24cb556a6c2c093aaf83ca8dad153108baa32fcb246b03933d54c3c5cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933385231561441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 85c4b80ac95179ffd0f4589308073115,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd,PodSandboxId:63d0cb46a94667ac4d8cdfa8d73be6186a6bd8b9932462598d54a97d103b5723,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933385226195625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6dc78251114ba960548e3208c1b8f7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118,PodSandboxId:684b58e7e432d5d25e329a790b9d5ecfa1bcb511d90e0a9db731808c24121ca2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933385228445364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ba0cfda6117a855993585bfe9bf2760e,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1,PodSandboxId:cacbb00e2ed6e1e12d78c80c68a4bf4f7a8af3e08bb266c12b5d292d4596ca4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933385203671313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31178a6c8df7add30220227e00cb54
80,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc2c0d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07c7236e-4a43-4324-b00a-cee1cfc23a26 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.450335017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9bc78f3-f914-424c-8caf-3219e81c0261 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.450426836Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9bc78f3-f914-424c-8caf-3219e81c0261 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.451520705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=072178f2-121a-4bd5-b152-5dd102d28cfa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.452022964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934667451996681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=072178f2-121a-4bd5-b152-5dd102d28cfa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.452673881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82766da7-f80d-4415-ac47-bcbfb7821538 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.452745022Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82766da7-f80d-4415-ac47-bcbfb7821538 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.452995798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933420553659976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab6e673af88268ee06fe9c2b3b7ac098cc58bbcb832927545a17193d7e41636,PodSandboxId:661084db9f623ffc4b6c1e8fa5ce376bc15144c73fdec444b14a710909a5e356,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933400245604126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 700149fa-1af8-429d-b3c3-f47b06c7e4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 2265fe6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f,PodSandboxId:9870d6544db5a4fb22a11f1a993445546e33defe7f72c133bb4a4a4628b9e70d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933397392295858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mfjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 452e9a58-6c09-4b38-8a0d-40f7b2b013d6,},Annotations:map[string]string{io.kubernetes.container.hash: 40083d68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b,PodSandboxId:c1147915f291074ecedadb6c7ffc9ee5cab7ab76b38111e55e68c88f63d5717c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933389732996259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-smhmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6cc9bb4-5
72b-4d0e-92a6-47fc71501ade,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5e86bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933389728932129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c
-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1,PodSandboxId:e6265e24cb556a6c2c093aaf83ca8dad153108baa32fcb246b03933d54c3c5cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933385231561441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 85c4b80ac95179ffd0f4589308073115,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd,PodSandboxId:63d0cb46a94667ac4d8cdfa8d73be6186a6bd8b9932462598d54a97d103b5723,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933385226195625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6dc78251114ba960548e3208c1b8f7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118,PodSandboxId:684b58e7e432d5d25e329a790b9d5ecfa1bcb511d90e0a9db731808c24121ca2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933385228445364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ba0cfda6117a855993585bfe9bf2760e,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1,PodSandboxId:cacbb00e2ed6e1e12d78c80c68a4bf4f7a8af3e08bb266c12b5d292d4596ca4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933385203671313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31178a6c8df7add30220227e00cb54
80,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc2c0d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82766da7-f80d-4415-ac47-bcbfb7821538 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.495476387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b69de70-98ee-4b90-a81d-a91deb673aef name=/runtime.v1.RuntimeService/Version
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.495607620Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b69de70-98ee-4b90-a81d-a91deb673aef name=/runtime.v1.RuntimeService/Version
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.497575545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2249295a-b6d8-4b9a-b348-3acd21e67e61 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.498230089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934667498199149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2249295a-b6d8-4b9a-b348-3acd21e67e61 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.498914370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec75420e-0a7e-48e4-b628-38655880221a name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.499047148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec75420e-0a7e-48e4-b628-38655880221a name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.499680375Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933420553659976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab6e673af88268ee06fe9c2b3b7ac098cc58bbcb832927545a17193d7e41636,PodSandboxId:661084db9f623ffc4b6c1e8fa5ce376bc15144c73fdec444b14a710909a5e356,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933400245604126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 700149fa-1af8-429d-b3c3-f47b06c7e4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 2265fe6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f,PodSandboxId:9870d6544db5a4fb22a11f1a993445546e33defe7f72c133bb4a4a4628b9e70d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933397392295858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mfjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 452e9a58-6c09-4b38-8a0d-40f7b2b013d6,},Annotations:map[string]string{io.kubernetes.container.hash: 40083d68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b,PodSandboxId:c1147915f291074ecedadb6c7ffc9ee5cab7ab76b38111e55e68c88f63d5717c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933389732996259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-smhmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6cc9bb4-5
72b-4d0e-92a6-47fc71501ade,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5e86bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933389728932129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c
-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1,PodSandboxId:e6265e24cb556a6c2c093aaf83ca8dad153108baa32fcb246b03933d54c3c5cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933385231561441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 85c4b80ac95179ffd0f4589308073115,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd,PodSandboxId:63d0cb46a94667ac4d8cdfa8d73be6186a6bd8b9932462598d54a97d103b5723,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933385226195625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6dc78251114ba960548e3208c1b8f7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118,PodSandboxId:684b58e7e432d5d25e329a790b9d5ecfa1bcb511d90e0a9db731808c24121ca2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933385228445364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ba0cfda6117a855993585bfe9bf2760e,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1,PodSandboxId:cacbb00e2ed6e1e12d78c80c68a4bf4f7a8af3e08bb266c12b5d292d4596ca4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933385203671313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31178a6c8df7add30220227e00cb54
80,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc2c0d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec75420e-0a7e-48e4-b628-38655880221a name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.544718209Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f7f69b2-468e-4269-bbba-3d9a9a2e5e64 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.545029719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f7f69b2-468e-4269-bbba-3d9a9a2e5e64 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.546443477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f63a9513-43bb-4ce7-83e8-0ce00b048080 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.546876596Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934667546848680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f63a9513-43bb-4ce7-83e8-0ce00b048080 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.547790513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41496b69-29af-46bc-b3bd-e1312eeb89eb name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.547870873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41496b69-29af-46bc-b3bd-e1312eeb89eb name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:11:07 default-k8s-diff-port-600433 crio[727]: time="2024-07-25 19:11:07.548231513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933420553659976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ab6e673af88268ee06fe9c2b3b7ac098cc58bbcb832927545a17193d7e41636,PodSandboxId:661084db9f623ffc4b6c1e8fa5ce376bc15144c73fdec444b14a710909a5e356,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933400245604126,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 700149fa-1af8-429d-b3c3-f47b06c7e4f0,},Annotations:map[string]string{io.kubernetes.container.hash: 2265fe6b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f,PodSandboxId:9870d6544db5a4fb22a11f1a993445546e33defe7f72c133bb4a4a4628b9e70d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933397392295858,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mfjzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 452e9a58-6c09-4b38-8a0d-40f7b2b013d6,},Annotations:map[string]string{io.kubernetes.container.hash: 40083d68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b,PodSandboxId:c1147915f291074ecedadb6c7ffc9ee5cab7ab76b38111e55e68c88f63d5717c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933389732996259,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-smhmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6cc9bb4-5
72b-4d0e-92a6-47fc71501ade,},Annotations:map[string]string{io.kubernetes.container.hash: 7f5e86bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6,PodSandboxId:b9ae6dd1fced9d8a520f0baed64f8160485ae47c8d5b3403f9212d4bddd094bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933389728932129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bca448b2-d88d-4978-891c
-947f057a331d,},Annotations:map[string]string{io.kubernetes.container.hash: e94e15f3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1,PodSandboxId:e6265e24cb556a6c2c093aaf83ca8dad153108baa32fcb246b03933d54c3c5cf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933385231561441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 85c4b80ac95179ffd0f4589308073115,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd,PodSandboxId:63d0cb46a94667ac4d8cdfa8d73be6186a6bd8b9932462598d54a97d103b5723,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933385226195625,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 6dc78251114ba960548e3208c1b8f7f8,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118,PodSandboxId:684b58e7e432d5d25e329a790b9d5ecfa1bcb511d90e0a9db731808c24121ca2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933385228445364,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: ba0cfda6117a855993585bfe9bf2760e,},Annotations:map[string]string{io.kubernetes.container.hash: 1dcc80dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1,PodSandboxId:cacbb00e2ed6e1e12d78c80c68a4bf4f7a8af3e08bb266c12b5d292d4596ca4e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933385203671313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-600433,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31178a6c8df7add30220227e00cb54
80,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc2c0d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41496b69-29af-46bc-b3bd-e1312eeb89eb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2387f4d44d2e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   b9ae6dd1fced9       storage-provisioner
	3ab6e673af882       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   661084db9f623       busybox
	b64c5166c6547       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   9870d6544db5a       coredns-7db6d8ff4d-mfjzs
	ef20f38592f5c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      21 minutes ago      Running             kube-proxy                1                   c1147915f2910       kube-proxy-smhmv
	070dd1b58b01a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   b9ae6dd1fced9       storage-provisioner
	de5e9269d9497       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      21 minutes ago      Running             kube-controller-manager   1                   e6265e24cb556       kube-controller-manager-default-k8s-diff-port-600433
	b6b7ff25c3f04       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      21 minutes ago      Running             kube-apiserver            1                   684b58e7e432d       kube-apiserver-default-k8s-diff-port-600433
	0c03165e87eac       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      21 minutes ago      Running             kube-scheduler            1                   63d0cb46a9466       kube-scheduler-default-k8s-diff-port-600433
	45aafe613d91f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   cacbb00e2ed6e       etcd-default-k8s-diff-port-600433
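The table above is the CRI container listing for the default-k8s-diff-port-600433 node: every control-plane component is on restart attempt 1 and only the storage-provisioner has an exited attempt. A sketch of how to reproduce a similar listing by hand, assuming SSH access to that profile's VM:

	# open a shell on the node for this profile, then list all containers
	minikube -p default-k8s-diff-port-600433 ssh
	sudo crictl ps -a    # shows running and exited containers with pod and attempt count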
	
	
	==> coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41830 - 57833 "HINFO IN 5102535641444002316.296120937777839854. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011398844s
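CoreDNS shows only its startup banner and the usual HINFO probe of the upstream resolver, so in-cluster DNS looks healthy at collection time. A hedged spot check from outside the node (the pod name "dnstest" and the busybox tag are illustrative, not from this run):

	kubectl --context default-k8s-diff-port-600433 run dnstest --image=busybox:1.36 \
	  --rm -it --restart=Never -- nslookup kubernetes.default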
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-600433
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-600433
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=default-k8s-diff-port-600433
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_41_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:41:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-600433
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 19:11:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 19:10:43 +0000   Thu, 25 Jul 2024 18:41:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 19:10:43 +0000   Thu, 25 Jul 2024 18:41:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 19:10:43 +0000   Thu, 25 Jul 2024 18:41:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 19:10:43 +0000   Thu, 25 Jul 2024 18:49:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.221
	  Hostname:    default-k8s-diff-port-600433
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1bc20fba7e1f4954abf42c564b7b937b
	  System UUID:                1bc20fba-7e1f-4954-abf4-2c564b7b937b
	  Boot ID:                    827e04e5-2063-444f-a88c-3db4783360ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7db6d8ff4d-mfjzs                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-600433                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-600433              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-600433     200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-smhmv                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-600433              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-5js8s                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasSufficientMemory
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-600433 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-600433 event: Registered Node default-k8s-diff-port-600433 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-600433 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-600433 event: Registered Node default-k8s-diff-port-600433 in Controller
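The section above is the `kubectl describe node` output for the control-plane node: Ready since 18:49:58, no taints, and a kubelet/kube-proxy restart about 21 minutes before collection, which matches the attempt-1 counts in the container listing. To re-query it directly (a sketch, assuming the cluster from this run is still up):

	kubectl --context default-k8s-diff-port-600433 describe node default-k8s-diff-port-600433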
	
	
	==> dmesg <==
	[Jul25 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051215] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038050] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.682507] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.794889] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.512917] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.306395] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.056245] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060594] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.182710] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.160538] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.285715] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.208841] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +1.708600] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +0.065203] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.503494] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.932285] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +3.760882] kauditd_printk_skb: 62 callbacks suppressed
	[Jul25 18:50] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] <==
	{"level":"warn","ts":"2024-07-25T19:09:43.656968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.426885ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-25T19:09:43.657202Z","caller":"traceutil/trace.go:171","msg":"trace[1228586323] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1523; }","duration":"142.726478ms","start":"2024-07-25T19:09:43.514391Z","end":"2024-07-25T19:09:43.657117Z","steps":["trace[1228586323] 'range keys from in-memory index tree'  (duration: 142.350527ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T19:09:44.436205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"367.431857ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-25T19:09:44.436327Z","caller":"traceutil/trace.go:171","msg":"trace[1523697867] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:1524; }","duration":"367.592019ms","start":"2024-07-25T19:09:44.068717Z","end":"2024-07-25T19:09:44.436309Z","steps":["trace[1523697867] 'count revisions from in-memory index tree'  (duration: 367.384453ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T19:09:44.436716Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T19:09:44.068705Z","time spent":"367.985369ms","remote":"127.0.0.1:47712","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":29,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true "}
	{"level":"info","ts":"2024-07-25T19:09:47.243926Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1284}
	{"level":"info","ts":"2024-07-25T19:09:47.247466Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1284,"took":"3.274366ms","hash":3608855022,"current-db-size-bytes":2301952,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1224704,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2024-07-25T19:09:47.247525Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3608855022,"revision":1284,"compact-revision":1039}
	{"level":"info","ts":"2024-07-25T19:10:10.340111Z","caller":"traceutil/trace.go:171","msg":"trace[303041556] linearizableReadLoop","detail":"{readStateIndex:1830; appliedIndex:1829; }","duration":"420.542352ms","start":"2024-07-25T19:10:09.919536Z","end":"2024-07-25T19:10:10.340078Z","steps":["trace[303041556] 'read index received'  (duration: 326.45419ms)","trace[303041556] 'applied index is now lower than readState.Index'  (duration: 94.08647ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-25T19:10:10.340398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"420.81788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-25T19:10:10.340444Z","caller":"traceutil/trace.go:171","msg":"trace[114765141] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1545; }","duration":"420.932365ms","start":"2024-07-25T19:10:09.9195Z","end":"2024-07-25T19:10:10.340432Z","steps":["trace[114765141] 'agreement among raft nodes before linearized reading'  (duration: 420.784243ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T19:10:10.340476Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T19:10:09.919487Z","time spent":"420.978029ms","remote":"127.0.0.1:47418","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":27,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-07-25T19:10:10.340695Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.127276ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:621"}
	{"level":"info","ts":"2024-07-25T19:10:10.340733Z","caller":"traceutil/trace.go:171","msg":"trace[250405153] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1545; }","duration":"159.199709ms","start":"2024-07-25T19:10:10.181526Z","end":"2024-07-25T19:10:10.340726Z","steps":["trace[250405153] 'agreement among raft nodes before linearized reading'  (duration: 159.104438ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T19:10:10.340216Z","caller":"traceutil/trace.go:171","msg":"trace[182601591] transaction","detail":"{read_only:false; response_revision:1545; number_of_response:1; }","duration":"533.64201ms","start":"2024-07-25T19:10:09.806546Z","end":"2024-07-25T19:10:10.340188Z","steps":["trace[182601591] 'process raft request'  (duration: 439.558626ms)","trace[182601591] 'compare'  (duration: 93.611693ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-25T19:10:10.341311Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-25T19:10:09.806528Z","time spent":"534.680507ms","remote":"127.0.0.1:47268","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.221\" mod_revision:1537 > success:<request_put:<key:\"/registry/masterleases/192.168.50.221\" value_size:67 lease:7552132965116906150 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.221\" > >"}
	{"level":"warn","ts":"2024-07-25T19:10:10.609841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.75348ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7552132965116906156 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1544 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:532 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-25T19:10:10.609941Z","caller":"traceutil/trace.go:171","msg":"trace[283574495] linearizableReadLoop","detail":"{readStateIndex:1831; appliedIndex:1830; }","duration":"263.662413ms","start":"2024-07-25T19:10:10.346265Z","end":"2024-07-25T19:10:10.609927Z","steps":["trace[283574495] 'read index received'  (duration: 126.705121ms)","trace[283574495] 'applied index is now lower than readState.Index'  (duration: 136.955584ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-25T19:10:10.610035Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.761027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-25T19:10:10.610071Z","caller":"traceutil/trace.go:171","msg":"trace[1861783554] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:1546; }","duration":"263.828779ms","start":"2024-07-25T19:10:10.346236Z","end":"2024-07-25T19:10:10.610065Z","steps":["trace[1861783554] 'agreement among raft nodes before linearized reading'  (duration: 263.729545ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T19:10:10.610323Z","caller":"traceutil/trace.go:171","msg":"trace[387328388] transaction","detail":"{read_only:false; response_revision:1546; number_of_response:1; }","duration":"264.770433ms","start":"2024-07-25T19:10:10.345542Z","end":"2024-07-25T19:10:10.610313Z","steps":["trace[387328388] 'process raft request'  (duration: 127.47339ms)","trace[387328388] 'compare'  (duration: 136.671697ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T19:11:03.140391Z","caller":"traceutil/trace.go:171","msg":"trace[1576909342] linearizableReadLoop","detail":"{readStateIndex:1883; appliedIndex:1882; }","duration":"222.503956ms","start":"2024-07-25T19:11:02.917854Z","end":"2024-07-25T19:11:03.140358Z","steps":["trace[1576909342] 'read index received'  (duration: 222.358949ms)","trace[1576909342] 'applied index is now lower than readState.Index'  (duration: 143.991µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T19:11:03.140583Z","caller":"traceutil/trace.go:171","msg":"trace[2128027662] transaction","detail":"{read_only:false; response_revision:1588; number_of_response:1; }","duration":"276.707159ms","start":"2024-07-25T19:11:02.863863Z","end":"2024-07-25T19:11:03.14057Z","steps":["trace[2128027662] 'process raft request'  (duration: 276.365219ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T19:11:03.140661Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.777632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-25T19:11:03.141766Z","caller":"traceutil/trace.go:171","msg":"trace[1663880066] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1588; }","duration":"223.890931ms","start":"2024-07-25T19:11:02.917823Z","end":"2024-07-25T19:11:03.141714Z","steps":["trace[1663880066] 'agreement among raft nodes before linearized reading'  (duration: 222.788818ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:11:07 up 21 min,  0 users,  load average: 0.03, 0.06, 0.06
	Linux default-k8s-diff-port-600433 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] <==
	I0725 19:07:49.473657       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:07:49.475993       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:07:49.476101       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:07:49.476194       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:09:48.476909       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:09:48.477051       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0725 19:09:49.478008       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:09:49.478068       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 19:09:49.478080       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:09:49.478268       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:09:49.478388       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:09:49.479631       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 19:10:10.341946       1 trace.go:236] Trace[1350170339]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.221,type:*v1.Endpoints,resource:apiServerIPInfo (25-Jul-2024 19:10:09.785) (total time: 556ms):
	Trace[1350170339]: ---"Txn call completed" 535ms (19:10:10.341)
	Trace[1350170339]: [556.067225ms] [556.067225ms] END
	W0725 19:10:49.478321       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:10:49.478685       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 19:10:49.478732       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:10:49.480883       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:10:49.480949       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:10:49.480956       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] <==
	E0725 19:06:01.343727       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:06:01.854549       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0725 19:06:03.334194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="299.522µs"
	I0725 19:06:18.333385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="57.631µs"
	E0725 19:06:31.348734       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:06:31.862297       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:07:01.353918       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:07:01.869265       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:07:31.358360       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:07:31.877086       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:08:01.364053       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:08:01.884760       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:08:31.368470       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:08:31.891859       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:09:01.373558       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:09:01.900991       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:09:31.378928       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:09:31.909208       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:10:01.383258       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:10:01.916812       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:10:31.388112       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:10:31.925598       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:11:01.394083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:11:01.933942       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0725 19:11:03.339621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="533.726µs"
	
	
	==> kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] <==
	I0725 18:49:49.931593       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:49:49.947918       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.221"]
	I0725 18:49:50.008178       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:49:50.008225       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:49:50.008242       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:49:50.011236       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:49:50.011434       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:49:50.011445       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:49:50.013106       1 config.go:192] "Starting service config controller"
	I0725 18:49:50.013172       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:49:50.013195       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:49:50.013199       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:49:50.013663       1 config.go:319] "Starting node config controller"
	I0725 18:49:50.013684       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:49:50.114940       1 shared_informer.go:320] Caches are synced for service config
	I0725 18:49:50.115280       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:49:50.116410       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] <==
	I0725 18:49:46.086594       1 serving.go:380] Generated self-signed cert in-memory
	W0725 18:49:48.419056       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 18:49:48.419149       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:49:48.419161       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 18:49:48.419167       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 18:49:48.486402       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 18:49:48.486435       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:49:48.487958       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 18:49:48.488037       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:49:48.488065       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:49:48.488081       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 18:49:48.589794       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 19:08:43 default-k8s-diff-port-600433 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:08:48 default-k8s-diff-port-600433 kubelet[937]: E0725 19:08:48.319085     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:09:03 default-k8s-diff-port-600433 kubelet[937]: E0725 19:09:03.319630     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:09:18 default-k8s-diff-port-600433 kubelet[937]: E0725 19:09:18.319981     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:09:33 default-k8s-diff-port-600433 kubelet[937]: E0725 19:09:33.322766     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:09:43 default-k8s-diff-port-600433 kubelet[937]: E0725 19:09:43.343558     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:09:43 default-k8s-diff-port-600433 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:09:43 default-k8s-diff-port-600433 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:09:43 default-k8s-diff-port-600433 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:09:43 default-k8s-diff-port-600433 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:09:44 default-k8s-diff-port-600433 kubelet[937]: E0725 19:09:44.319472     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:09:56 default-k8s-diff-port-600433 kubelet[937]: E0725 19:09:56.320105     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:10:09 default-k8s-diff-port-600433 kubelet[937]: E0725 19:10:09.320078     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:10:21 default-k8s-diff-port-600433 kubelet[937]: E0725 19:10:21.319435     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:10:36 default-k8s-diff-port-600433 kubelet[937]: E0725 19:10:36.318872     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:10:43 default-k8s-diff-port-600433 kubelet[937]: E0725 19:10:43.343745     937 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:10:43 default-k8s-diff-port-600433 kubelet[937]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:10:43 default-k8s-diff-port-600433 kubelet[937]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:10:43 default-k8s-diff-port-600433 kubelet[937]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:10:43 default-k8s-diff-port-600433 kubelet[937]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:10:51 default-k8s-diff-port-600433 kubelet[937]: E0725 19:10:51.337480     937 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 25 19:10:51 default-k8s-diff-port-600433 kubelet[937]: E0725 19:10:51.337555     937 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 25 19:10:51 default-k8s-diff-port-600433 kubelet[937]: E0725 19:10:51.337799     937 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v5xqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-5js8s_kube-system(1c72ac7a-9a56-4056-80bf-398eeab90b94): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 25 19:10:51 default-k8s-diff-port-600433 kubelet[937]: E0725 19:10:51.337847     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	Jul 25 19:11:03 default-k8s-diff-port-600433 kubelet[937]: E0725 19:11:03.319820     937 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5js8s" podUID="1c72ac7a-9a56-4056-80bf-398eeab90b94"
	
	
	==> storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] <==
	I0725 18:49:49.858088       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0725 18:50:19.862633       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] <==
	I0725 18:50:20.672751       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 18:50:20.686440       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 18:50:20.686655       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 18:50:20.700596       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 18:50:20.700866       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-600433_c6e7b843-a5f6-4764-b122-eea9678b9b6a!
	I0725 18:50:20.701171       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3281dd58-1ba3-4e8d-af3f-db67d793b109", APIVersion:"v1", ResourceVersion:"564", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-600433_c6e7b843-a5f6-4764-b122-eea9678b9b6a became leader
	I0725 18:50:20.801716       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-600433_c6e7b843-a5f6-4764-b122-eea9678b9b6a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-600433 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5js8s
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-600433 describe pod metrics-server-569cc877fc-5js8s
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-600433 describe pod metrics-server-569cc877fc-5js8s: exit status 1 (71.807869ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5js8s" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-600433 describe pod metrics-server-569cc877fc-5js8s: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (466.81s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-646344 -n embed-certs-646344
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-25 19:12:57.823208084 +0000 UTC m=+6251.903939937
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-646344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-646344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (67.853451ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-646344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-646344 -n embed-certs-646344
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-646344 logs -n 25
E0725 19:12:58.990371   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-646344 logs -n 25: (1.376685946s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-889508 sudo                               | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo cat                           | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo cat                           | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo                               | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo                               | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo cat                           | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo docker                        | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo                               | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo                               | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo cat                           | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo cat                           | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo                               | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo                               | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo                               | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo cat                           | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo cat                           | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo                               | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo                               | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo                               | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo find                          | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-889508 sudo crio                          | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-889508                                    | kindnet-889508            | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:11 UTC |
	| start   | -p enable-default-cni-889508                         | enable-default-cni-889508 | jenkins | v1.33.1 | 25 Jul 24 19:11 UTC | 25 Jul 24 19:12 UTC |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p calico-889508 pgrep -a                            | calico-889508             | jenkins | v1.33.1 | 25 Jul 24 19:12 UTC | 25 Jul 24 19:12 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-889508                         | enable-default-cni-889508 | jenkins | v1.33.1 | 25 Jul 24 19:12 UTC | 25 Jul 24 19:12 UTC |
	|         | pgrep -a kubelet                                     |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 19:11:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 19:11:25.479653   70755 out.go:291] Setting OutFile to fd 1 ...
	I0725 19:11:25.480219   70755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:11:25.480234   70755 out.go:304] Setting ErrFile to fd 2...
	I0725 19:11:25.480242   70755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:11:25.480712   70755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 19:11:25.481556   70755 out.go:298] Setting JSON to false
	I0725 19:11:25.482626   70755 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6829,"bootTime":1721927856,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 19:11:25.482681   70755 start.go:139] virtualization: kvm guest
	I0725 19:11:25.484525   70755 out.go:177] * [enable-default-cni-889508] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 19:11:25.486136   70755 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 19:11:25.486155   70755 notify.go:220] Checking for updates...
	I0725 19:11:25.488598   70755 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 19:11:25.489946   70755 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 19:11:25.491092   70755 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 19:11:25.492317   70755 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 19:11:25.493518   70755 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 19:11:25.494970   70755 config.go:182] Loaded profile config "calico-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:11:25.495061   70755 config.go:182] Loaded profile config "custom-flannel-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:11:25.495137   70755 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:11:25.495226   70755 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 19:11:25.531345   70755 out.go:177] * Using the kvm2 driver based on user configuration
	I0725 19:11:25.532514   70755 start.go:297] selected driver: kvm2
	I0725 19:11:25.532527   70755 start.go:901] validating driver "kvm2" against <nil>
	I0725 19:11:25.532537   70755 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 19:11:25.533204   70755 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 19:11:25.533261   70755 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 19:11:25.548020   70755 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 19:11:25.548062   70755 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0725 19:11:25.548266   70755 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0725 19:11:25.548293   70755 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 19:11:25.548396   70755 cni.go:84] Creating CNI manager for "bridge"
	I0725 19:11:25.548414   70755 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 19:11:25.548481   70755 start.go:340] cluster config:
	{Name:enable-default-cni-889508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:11:25.548598   70755 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 19:11:25.551219   70755 out.go:177] * Starting "enable-default-cni-889508" primary control-plane node in "enable-default-cni-889508" cluster
	I0725 19:11:25.858410   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:25.858836   68507 main.go:141] libmachine: (calico-889508) DBG | unable to find current IP address of domain calico-889508 in network mk-calico-889508
	I0725 19:11:25.858859   68507 main.go:141] libmachine: (calico-889508) DBG | I0725 19:11:25.858801   68549 retry.go:31] will retry after 3.504478931s: waiting for machine to come up
	I0725 19:11:29.365410   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:29.365885   68507 main.go:141] libmachine: (calico-889508) DBG | unable to find current IP address of domain calico-889508 in network mk-calico-889508
	I0725 19:11:29.365913   68507 main.go:141] libmachine: (calico-889508) DBG | I0725 19:11:29.365842   68549 retry.go:31] will retry after 3.744672666s: waiting for machine to come up
	I0725 19:11:25.552548   70755 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 19:11:25.552582   70755 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 19:11:25.552592   70755 cache.go:56] Caching tarball of preloaded images
	I0725 19:11:25.552655   70755 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 19:11:25.552664   70755 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 19:11:25.552754   70755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/config.json ...
	I0725 19:11:25.552773   70755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/config.json: {Name:mk58cfc62725979b404c8357f124f09ef4bba9c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:11:25.552889   70755 start.go:360] acquireMachinesLock for enable-default-cni-889508: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 19:11:34.600877   69429 start.go:364] duration metric: took 18.727326061s to acquireMachinesLock for "custom-flannel-889508"
	I0725 19:11:34.600953   69429 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
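
In this config, CNI:testdata/kube-flannel.yaml records that --cni was given a path to a CNI manifest instead of a built-in CNI name. A sketch of that form of invocation (assumed flags; the test's exact command line is not shown in the log):

    minikube start -p custom-flannel-889508 --driver=kvm2 --container-runtime=crio --cni=testdata/kube-flannel.yaml
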
	I0725 19:11:34.601085   69429 start.go:125] createHost starting for "" (driver="kvm2")
	I0725 19:11:33.114552   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.115070   68507 main.go:141] libmachine: (calico-889508) Found IP for machine: 192.168.50.187
	I0725 19:11:33.115103   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has current primary IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.115112   68507 main.go:141] libmachine: (calico-889508) Reserving static IP address...
	I0725 19:11:33.115822   68507 main.go:141] libmachine: (calico-889508) DBG | unable to find host DHCP lease matching {name: "calico-889508", mac: "52:54:00:fe:9b:1c", ip: "192.168.50.187"} in network mk-calico-889508
	I0725 19:11:33.189727   68507 main.go:141] libmachine: (calico-889508) DBG | Getting to WaitForSSH function...
	I0725 19:11:33.189753   68507 main.go:141] libmachine: (calico-889508) Reserved static IP address: 192.168.50.187
	I0725 19:11:33.189766   68507 main.go:141] libmachine: (calico-889508) Waiting for SSH to be available...
	I0725 19:11:33.192471   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.192859   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:33.192894   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.192964   68507 main.go:141] libmachine: (calico-889508) DBG | Using SSH client type: external
	I0725 19:11:33.192998   68507 main.go:141] libmachine: (calico-889508) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/calico-889508/id_rsa (-rw-------)
	I0725 19:11:33.193027   68507 main.go:141] libmachine: (calico-889508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/calico-889508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 19:11:33.193052   68507 main.go:141] libmachine: (calico-889508) DBG | About to run SSH command:
	I0725 19:11:33.193087   68507 main.go:141] libmachine: (calico-889508) DBG | exit 0
	I0725 19:11:33.316386   68507 main.go:141] libmachine: (calico-889508) DBG | SSH cmd err, output: <nil>: 
	I0725 19:11:33.316644   68507 main.go:141] libmachine: (calico-889508) KVM machine creation complete!
	I0725 19:11:33.316965   68507 main.go:141] libmachine: (calico-889508) Calling .GetConfigRaw
	I0725 19:11:33.317528   68507 main.go:141] libmachine: (calico-889508) Calling .DriverName
	I0725 19:11:33.317723   68507 main.go:141] libmachine: (calico-889508) Calling .DriverName
	I0725 19:11:33.317922   68507 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 19:11:33.317941   68507 main.go:141] libmachine: (calico-889508) Calling .GetState
	I0725 19:11:33.319188   68507 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 19:11:33.319203   68507 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 19:11:33.319209   68507 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 19:11:33.319215   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:11:33.321800   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.322137   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:33.322159   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.322302   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:11:33.322472   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:33.322635   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:33.322771   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:11:33.322914   68507 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:33.323155   68507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0725 19:11:33.323170   68507 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 19:11:33.427336   68507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 19:11:33.427362   68507 main.go:141] libmachine: Detecting the provisioner...
	I0725 19:11:33.427369   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:11:33.430237   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.430710   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:33.430739   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.430906   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:11:33.431088   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:33.431244   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:33.431353   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:11:33.431537   68507 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:33.431714   68507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0725 19:11:33.431728   68507 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 19:11:33.536821   68507 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 19:11:33.536903   68507 main.go:141] libmachine: found compatible host: buildroot
	I0725 19:11:33.536912   68507 main.go:141] libmachine: Provisioning with buildroot...
	I0725 19:11:33.536928   68507 main.go:141] libmachine: (calico-889508) Calling .GetMachineName
	I0725 19:11:33.537166   68507 buildroot.go:166] provisioning hostname "calico-889508"
	I0725 19:11:33.537192   68507 main.go:141] libmachine: (calico-889508) Calling .GetMachineName
	I0725 19:11:33.537375   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:11:33.540132   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.540534   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:33.540558   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.540721   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:11:33.540901   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:33.541072   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:33.541213   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:11:33.541352   68507 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:33.541512   68507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0725 19:11:33.541523   68507 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-889508 && echo "calico-889508" | sudo tee /etc/hostname
	I0725 19:11:33.661902   68507 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-889508
	
	I0725 19:11:33.661929   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:11:33.664939   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.665269   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:33.665290   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.665489   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:11:33.665686   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:33.665829   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:33.665937   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:11:33.666084   68507 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:33.666242   68507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0725 19:11:33.666257   68507 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-889508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-889508/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-889508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 19:11:33.776511   68507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 19:11:33.776539   68507 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 19:11:33.776580   68507 buildroot.go:174] setting up certificates
	I0725 19:11:33.776602   68507 provision.go:84] configureAuth start
	I0725 19:11:33.776611   68507 main.go:141] libmachine: (calico-889508) Calling .GetMachineName
	I0725 19:11:33.776896   68507 main.go:141] libmachine: (calico-889508) Calling .GetIP
	I0725 19:11:33.779649   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.780084   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:33.780124   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.780238   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:11:33.782503   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.782857   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:33.782883   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.783001   68507 provision.go:143] copyHostCerts
	I0725 19:11:33.783050   68507 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 19:11:33.783059   68507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 19:11:33.783123   68507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 19:11:33.783213   68507 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 19:11:33.783220   68507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 19:11:33.783244   68507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 19:11:33.783312   68507 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 19:11:33.783323   68507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 19:11:33.783355   68507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 19:11:33.783427   68507 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.calico-889508 san=[127.0.0.1 192.168.50.187 calico-889508 localhost minikube]
	I0725 19:11:33.953997   68507 provision.go:177] copyRemoteCerts
	I0725 19:11:33.954056   68507 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 19:11:33.954077   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:11:33.957421   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.957832   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:33.957852   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:33.958121   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:11:33.958291   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:33.958459   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:11:33.958560   68507 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/calico-889508/id_rsa Username:docker}
	I0725 19:11:34.042979   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 19:11:34.066327   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 19:11:34.088128   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 19:11:34.109679   68507 provision.go:87] duration metric: took 333.066368ms to configureAuth
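
configureAuth generated a server certificate with SANs [127.0.0.1 192.168.50.187 calico-889508 localhost minikube] and copied it to /etc/docker on the guest. If the SANs ever need to be verified by hand, one illustrative check from inside the guest (assuming openssl is present there) is:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
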
	I0725 19:11:34.109706   68507 buildroot.go:189] setting minikube options for container-runtime
	I0725 19:11:34.109868   68507 config.go:182] Loaded profile config "calico-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:11:34.109932   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:11:34.112697   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.113038   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:34.113064   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.113228   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:11:34.113438   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:34.113612   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:34.113740   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:11:34.113901   68507 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:34.114071   68507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0725 19:11:34.114090   68507 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 19:11:34.364964   68507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 19:11:34.364992   68507 main.go:141] libmachine: Checking connection to Docker...
	I0725 19:11:34.365001   68507 main.go:141] libmachine: (calico-889508) Calling .GetURL
	I0725 19:11:34.366224   68507 main.go:141] libmachine: (calico-889508) DBG | Using libvirt version 6000000
	I0725 19:11:34.368572   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.368938   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:34.368981   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.369125   68507 main.go:141] libmachine: Docker is up and running!
	I0725 19:11:34.369135   68507 main.go:141] libmachine: Reticulating splines...
	I0725 19:11:34.369142   68507 client.go:171] duration metric: took 24.380443226s to LocalClient.Create
	I0725 19:11:34.369177   68507 start.go:167] duration metric: took 24.380528874s to libmachine.API.Create "calico-889508"
	I0725 19:11:34.369189   68507 start.go:293] postStartSetup for "calico-889508" (driver="kvm2")
	I0725 19:11:34.369203   68507 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 19:11:34.369217   68507 main.go:141] libmachine: (calico-889508) Calling .DriverName
	I0725 19:11:34.369451   68507 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 19:11:34.369487   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:11:34.371658   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.372018   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:34.372041   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.372195   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:11:34.372372   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:34.372616   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:11:34.372784   68507 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/calico-889508/id_rsa Username:docker}
	I0725 19:11:34.453857   68507 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 19:11:34.457679   68507 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 19:11:34.457699   68507 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 19:11:34.457751   68507 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 19:11:34.457833   68507 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 19:11:34.457963   68507 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 19:11:34.466293   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 19:11:34.489519   68507 start.go:296] duration metric: took 120.314257ms for postStartSetup
	I0725 19:11:34.489575   68507 main.go:141] libmachine: (calico-889508) Calling .GetConfigRaw
	I0725 19:11:34.490102   68507 main.go:141] libmachine: (calico-889508) Calling .GetIP
	I0725 19:11:34.492791   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.493104   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:34.493142   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.493344   68507 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/config.json ...
	I0725 19:11:34.493505   68507 start.go:128] duration metric: took 24.526235641s to createHost
	I0725 19:11:34.493526   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:11:34.496586   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.496916   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:34.496937   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.497110   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:11:34.497303   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:34.497466   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:34.497639   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:11:34.497810   68507 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:34.497955   68507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0725 19:11:34.497965   68507 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 19:11:34.600745   68507 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721934694.577113145
	
	I0725 19:11:34.600766   68507 fix.go:216] guest clock: 1721934694.577113145
	I0725 19:11:34.600773   68507 fix.go:229] Guest: 2024-07-25 19:11:34.577113145 +0000 UTC Remote: 2024-07-25 19:11:34.493515678 +0000 UTC m=+24.658326191 (delta=83.597467ms)
	I0725 19:11:34.600809   68507 fix.go:200] guest clock delta is within tolerance: 83.597467ms
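
The guest clock check runs date +%s.%N over SSH and compares the result with the host clock; here the delta is ~83.6ms, within tolerance. A hand-rolled sketch of the same measurement (host address and key path taken from the log above; this is not minikube's actual code):

    guest=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/calico-889508/id_rsa docker@192.168.50.187 date +%s.%N)
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.3fs\n", h - g }'
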
	I0725 19:11:34.600814   68507 start.go:83] releasing machines lock for "calico-889508", held for 24.633655639s
	I0725 19:11:34.600834   68507 main.go:141] libmachine: (calico-889508) Calling .DriverName
	I0725 19:11:34.601103   68507 main.go:141] libmachine: (calico-889508) Calling .GetIP
	I0725 19:11:34.603993   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.604357   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:34.604384   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.604560   68507 main.go:141] libmachine: (calico-889508) Calling .DriverName
	I0725 19:11:34.605047   68507 main.go:141] libmachine: (calico-889508) Calling .DriverName
	I0725 19:11:34.605225   68507 main.go:141] libmachine: (calico-889508) Calling .DriverName
	I0725 19:11:34.605324   68507 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 19:11:34.605378   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:11:34.605485   68507 ssh_runner.go:195] Run: cat /version.json
	I0725 19:11:34.605511   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:11:34.608106   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.608399   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.608533   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:34.608560   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.608664   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:11:34.608838   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:34.608876   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:34.608899   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:34.608982   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:11:34.609070   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:11:34.609147   68507 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/calico-889508/id_rsa Username:docker}
	I0725 19:11:34.609225   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:11:34.609360   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:11:34.609601   68507 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/calico-889508/id_rsa Username:docker}
	I0725 19:11:34.685555   68507 ssh_runner.go:195] Run: systemctl --version
	I0725 19:11:34.719944   68507 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 19:11:34.603111   69429 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 19:11:34.603335   69429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:11:34.603383   69429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:11:34.619702   69429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33333
	I0725 19:11:34.620164   69429 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:11:34.620765   69429 main.go:141] libmachine: Using API Version  1
	I0725 19:11:34.620796   69429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:11:34.621132   69429 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:11:34.621310   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetMachineName
	I0725 19:11:34.621507   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .DriverName
	I0725 19:11:34.621650   69429 start.go:159] libmachine.API.Create for "custom-flannel-889508" (driver="kvm2")
	I0725 19:11:34.621688   69429 client.go:168] LocalClient.Create starting
	I0725 19:11:34.621727   69429 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 19:11:34.621760   69429 main.go:141] libmachine: Decoding PEM data...
	I0725 19:11:34.621782   69429 main.go:141] libmachine: Parsing certificate...
	I0725 19:11:34.621854   69429 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 19:11:34.621878   69429 main.go:141] libmachine: Decoding PEM data...
	I0725 19:11:34.621891   69429 main.go:141] libmachine: Parsing certificate...
	I0725 19:11:34.621912   69429 main.go:141] libmachine: Running pre-create checks...
	I0725 19:11:34.621924   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .PreCreateCheck
	I0725 19:11:34.622294   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetConfigRaw
	I0725 19:11:34.622725   69429 main.go:141] libmachine: Creating machine...
	I0725 19:11:34.622740   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .Create
	I0725 19:11:34.622867   69429 main.go:141] libmachine: (custom-flannel-889508) Creating KVM machine...
	I0725 19:11:34.623984   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found existing default KVM network
	I0725 19:11:34.625474   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:34.625315   70840 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f5e0}
	I0725 19:11:34.625500   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | created network xml: 
	I0725 19:11:34.625522   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | <network>
	I0725 19:11:34.625540   69429 main.go:141] libmachine: (custom-flannel-889508) DBG |   <name>mk-custom-flannel-889508</name>
	I0725 19:11:34.625553   69429 main.go:141] libmachine: (custom-flannel-889508) DBG |   <dns enable='no'/>
	I0725 19:11:34.625563   69429 main.go:141] libmachine: (custom-flannel-889508) DBG |   
	I0725 19:11:34.625578   69429 main.go:141] libmachine: (custom-flannel-889508) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0725 19:11:34.625598   69429 main.go:141] libmachine: (custom-flannel-889508) DBG |     <dhcp>
	I0725 19:11:34.625616   69429 main.go:141] libmachine: (custom-flannel-889508) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0725 19:11:34.625627   69429 main.go:141] libmachine: (custom-flannel-889508) DBG |     </dhcp>
	I0725 19:11:34.625636   69429 main.go:141] libmachine: (custom-flannel-889508) DBG |   </ip>
	I0725 19:11:34.625646   69429 main.go:141] libmachine: (custom-flannel-889508) DBG |   
	I0725 19:11:34.625655   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | </network>
	I0725 19:11:34.625664   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | 
	I0725 19:11:34.630872   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | trying to create private KVM network mk-custom-flannel-889508 192.168.39.0/24...
	I0725 19:11:34.701433   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | private KVM network mk-custom-flannel-889508 192.168.39.0/24 created
	I0725 19:11:34.701469   69429 main.go:141] libmachine: (custom-flannel-889508) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508 ...
	I0725 19:11:34.701501   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:34.701431   70840 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 19:11:34.701528   69429 main.go:141] libmachine: (custom-flannel-889508) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 19:11:34.701557   69429 main.go:141] libmachine: (custom-flannel-889508) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 19:11:34.961402   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:34.961275   70840 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/id_rsa...
	I0725 19:11:35.130611   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:35.130505   70840 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/custom-flannel-889508.rawdisk...
	I0725 19:11:35.130639   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Writing magic tar header
	I0725 19:11:35.130652   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Writing SSH key tar header
	I0725 19:11:35.130737   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:35.130676   70840 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508 ...
	I0725 19:11:35.130800   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508
	I0725 19:11:35.130841   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 19:11:35.130858   69429 main.go:141] libmachine: (custom-flannel-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508 (perms=drwx------)
	I0725 19:11:35.130875   69429 main.go:141] libmachine: (custom-flannel-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 19:11:35.130886   69429 main.go:141] libmachine: (custom-flannel-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 19:11:35.130900   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 19:11:35.130914   69429 main.go:141] libmachine: (custom-flannel-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 19:11:35.130927   69429 main.go:141] libmachine: (custom-flannel-889508) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 19:11:35.130939   69429 main.go:141] libmachine: (custom-flannel-889508) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 19:11:35.130953   69429 main.go:141] libmachine: (custom-flannel-889508) Creating domain...
	I0725 19:11:35.130966   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 19:11:35.130979   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 19:11:35.130992   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Checking permissions on dir: /home/jenkins
	I0725 19:11:35.131003   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Checking permissions on dir: /home
	I0725 19:11:35.131015   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Skipping /home - not owner
	I0725 19:11:35.132038   69429 main.go:141] libmachine: (custom-flannel-889508) define libvirt domain using xml: 
	I0725 19:11:35.132060   69429 main.go:141] libmachine: (custom-flannel-889508) <domain type='kvm'>
	I0725 19:11:35.132070   69429 main.go:141] libmachine: (custom-flannel-889508)   <name>custom-flannel-889508</name>
	I0725 19:11:35.132076   69429 main.go:141] libmachine: (custom-flannel-889508)   <memory unit='MiB'>3072</memory>
	I0725 19:11:35.132084   69429 main.go:141] libmachine: (custom-flannel-889508)   <vcpu>2</vcpu>
	I0725 19:11:35.132090   69429 main.go:141] libmachine: (custom-flannel-889508)   <features>
	I0725 19:11:35.132099   69429 main.go:141] libmachine: (custom-flannel-889508)     <acpi/>
	I0725 19:11:35.132106   69429 main.go:141] libmachine: (custom-flannel-889508)     <apic/>
	I0725 19:11:35.132118   69429 main.go:141] libmachine: (custom-flannel-889508)     <pae/>
	I0725 19:11:35.132134   69429 main.go:141] libmachine: (custom-flannel-889508)     
	I0725 19:11:35.132146   69429 main.go:141] libmachine: (custom-flannel-889508)   </features>
	I0725 19:11:35.132158   69429 main.go:141] libmachine: (custom-flannel-889508)   <cpu mode='host-passthrough'>
	I0725 19:11:35.132191   69429 main.go:141] libmachine: (custom-flannel-889508)   
	I0725 19:11:35.132214   69429 main.go:141] libmachine: (custom-flannel-889508)   </cpu>
	I0725 19:11:35.132229   69429 main.go:141] libmachine: (custom-flannel-889508)   <os>
	I0725 19:11:35.132243   69429 main.go:141] libmachine: (custom-flannel-889508)     <type>hvm</type>
	I0725 19:11:35.132272   69429 main.go:141] libmachine: (custom-flannel-889508)     <boot dev='cdrom'/>
	I0725 19:11:35.132286   69429 main.go:141] libmachine: (custom-flannel-889508)     <boot dev='hd'/>
	I0725 19:11:35.132297   69429 main.go:141] libmachine: (custom-flannel-889508)     <bootmenu enable='no'/>
	I0725 19:11:35.132309   69429 main.go:141] libmachine: (custom-flannel-889508)   </os>
	I0725 19:11:35.132328   69429 main.go:141] libmachine: (custom-flannel-889508)   <devices>
	I0725 19:11:35.132347   69429 main.go:141] libmachine: (custom-flannel-889508)     <disk type='file' device='cdrom'>
	I0725 19:11:35.132385   69429 main.go:141] libmachine: (custom-flannel-889508)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/boot2docker.iso'/>
	I0725 19:11:35.132403   69429 main.go:141] libmachine: (custom-flannel-889508)       <target dev='hdc' bus='scsi'/>
	I0725 19:11:35.132426   69429 main.go:141] libmachine: (custom-flannel-889508)       <readonly/>
	I0725 19:11:35.132447   69429 main.go:141] libmachine: (custom-flannel-889508)     </disk>
	I0725 19:11:35.132459   69429 main.go:141] libmachine: (custom-flannel-889508)     <disk type='file' device='disk'>
	I0725 19:11:35.132472   69429 main.go:141] libmachine: (custom-flannel-889508)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 19:11:35.132490   69429 main.go:141] libmachine: (custom-flannel-889508)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/custom-flannel-889508.rawdisk'/>
	I0725 19:11:35.132506   69429 main.go:141] libmachine: (custom-flannel-889508)       <target dev='hda' bus='virtio'/>
	I0725 19:11:35.132519   69429 main.go:141] libmachine: (custom-flannel-889508)     </disk>
	I0725 19:11:35.132530   69429 main.go:141] libmachine: (custom-flannel-889508)     <interface type='network'>
	I0725 19:11:35.132543   69429 main.go:141] libmachine: (custom-flannel-889508)       <source network='mk-custom-flannel-889508'/>
	I0725 19:11:35.132555   69429 main.go:141] libmachine: (custom-flannel-889508)       <model type='virtio'/>
	I0725 19:11:35.132584   69429 main.go:141] libmachine: (custom-flannel-889508)     </interface>
	I0725 19:11:35.132607   69429 main.go:141] libmachine: (custom-flannel-889508)     <interface type='network'>
	I0725 19:11:35.132621   69429 main.go:141] libmachine: (custom-flannel-889508)       <source network='default'/>
	I0725 19:11:35.132635   69429 main.go:141] libmachine: (custom-flannel-889508)       <model type='virtio'/>
	I0725 19:11:35.132647   69429 main.go:141] libmachine: (custom-flannel-889508)     </interface>
	I0725 19:11:35.132657   69429 main.go:141] libmachine: (custom-flannel-889508)     <serial type='pty'>
	I0725 19:11:35.132666   69429 main.go:141] libmachine: (custom-flannel-889508)       <target port='0'/>
	I0725 19:11:35.132673   69429 main.go:141] libmachine: (custom-flannel-889508)     </serial>
	I0725 19:11:35.132679   69429 main.go:141] libmachine: (custom-flannel-889508)     <console type='pty'>
	I0725 19:11:35.132686   69429 main.go:141] libmachine: (custom-flannel-889508)       <target type='serial' port='0'/>
	I0725 19:11:35.132698   69429 main.go:141] libmachine: (custom-flannel-889508)     </console>
	I0725 19:11:35.132709   69429 main.go:141] libmachine: (custom-flannel-889508)     <rng model='virtio'>
	I0725 19:11:35.132722   69429 main.go:141] libmachine: (custom-flannel-889508)       <backend model='random'>/dev/random</backend>
	I0725 19:11:35.132740   69429 main.go:141] libmachine: (custom-flannel-889508)     </rng>
	I0725 19:11:35.132759   69429 main.go:141] libmachine: (custom-flannel-889508)     
	I0725 19:11:35.132769   69429 main.go:141] libmachine: (custom-flannel-889508)     
	I0725 19:11:35.132777   69429 main.go:141] libmachine: (custom-flannel-889508)   </devices>
	I0725 19:11:35.132786   69429 main.go:141] libmachine: (custom-flannel-889508) </domain>
	I0725 19:11:35.132813   69429 main.go:141] libmachine: (custom-flannel-889508) 
	I0725 19:11:35.137523   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:08:49:b9 in network default
	I0725 19:11:35.138296   69429 main.go:141] libmachine: (custom-flannel-889508) Ensuring networks are active...
	I0725 19:11:35.138327   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:35.139117   69429 main.go:141] libmachine: (custom-flannel-889508) Ensuring network default is active
	I0725 19:11:35.139486   69429 main.go:141] libmachine: (custom-flannel-889508) Ensuring network mk-custom-flannel-889508 is active
	I0725 19:11:35.139988   69429 main.go:141] libmachine: (custom-flannel-889508) Getting domain xml...
	I0725 19:11:35.140787   69429 main.go:141] libmachine: (custom-flannel-889508) Creating domain...
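
At this point libmachine has created the private network mk-custom-flannel-889508 and defined the custom-flannel-889508 domain from the XML above. Illustrative libvirt commands for inspecting the result by hand (not part of the test flow):

    virsh net-list --all                   # should include mk-custom-flannel-889508 and default
    virsh dominfo custom-flannel-889508    # domain state, vCPU and memory settings
    virsh domifaddr custom-flannel-889508  # DHCP leases once the guest is up
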
	I0725 19:11:34.886625   68507 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 19:11:34.892248   68507 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 19:11:34.892359   68507 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 19:11:34.907403   68507 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 19:11:34.907428   68507 start.go:495] detecting cgroup driver to use...
	I0725 19:11:34.907510   68507 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 19:11:34.927841   68507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 19:11:34.943530   68507 docker.go:217] disabling cri-docker service (if available) ...
	I0725 19:11:34.943583   68507 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 19:11:34.960145   68507 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 19:11:34.975962   68507 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 19:11:35.101076   68507 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 19:11:35.243356   68507 docker.go:233] disabling docker service ...
	I0725 19:11:35.243436   68507 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 19:11:35.259429   68507 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 19:11:35.273353   68507 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 19:11:35.429796   68507 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 19:11:35.563889   68507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 19:11:35.578683   68507 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 19:11:35.597545   68507 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 19:11:35.597596   68507 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:11:35.607609   68507 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 19:11:35.607677   68507 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:11:35.619314   68507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:11:35.629759   68507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:11:35.639930   68507 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 19:11:35.650137   68507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:11:35.659864   68507 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:11:35.676866   68507 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:11:35.686618   68507 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 19:11:35.695934   68507 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 19:11:35.695993   68507 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 19:11:35.708791   68507 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 19:11:35.718352   68507 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:11:35.838653   68507 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 19:11:35.979791   68507 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 19:11:35.979860   68507 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 19:11:35.985853   68507 start.go:563] Will wait 60s for crictl version
	I0725 19:11:35.985905   68507 ssh_runner.go:195] Run: which crictl
	I0725 19:11:35.990090   68507 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 19:11:36.030018   68507 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 19:11:36.030087   68507 ssh_runner.go:195] Run: crio --version
	I0725 19:11:36.056727   68507 ssh_runner.go:195] Run: crio --version
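The "Will wait 60s for crictl version" step above polls the CRI socket until crictl can talk to the freshly restarted CRI-O. A hedged standalone sketch of that wait loop follows; it assumes crictl at /usr/bin/crictl as in this run and is not the code minikube actually uses.

// wait_crictl_sketch.go - hypothetical sketch: poll `crictl version` until it
// exits 0 or a 60s deadline passes, mirroring the wait logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Print(string(out)) // e.g. RuntimeName: cri-o, RuntimeVersion: 1.29.1
			return
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "crictl did not become ready within 60s:", err)
			os.Exit(1)
		}
		time.Sleep(time.Second)
	}
}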
	I0725 19:11:36.086050   68507 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 19:11:36.087311   68507 main.go:141] libmachine: (calico-889508) Calling .GetIP
	I0725 19:11:36.090274   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:36.090675   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:11:36.090705   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:11:36.090943   68507 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0725 19:11:36.094891   68507 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:11:36.107777   68507 kubeadm.go:883] updating cluster {Name:calico-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:calico-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 19:11:36.107877   68507 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 19:11:36.107929   68507 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:11:36.149746   68507 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 19:11:36.149832   68507 ssh_runner.go:195] Run: which lz4
	I0725 19:11:36.153794   68507 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 19:11:36.157612   68507 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 19:11:36.157639   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 19:11:37.469375   68507 crio.go:462] duration metric: took 1.315613871s to copy over tarball
	I0725 19:11:37.469453   68507 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 19:11:39.796024   68507 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.326538202s)
	I0725 19:11:39.796056   68507 crio.go:469] duration metric: took 2.326655061s to extract the tarball
	I0725 19:11:39.796065   68507 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 19:11:39.834709   68507 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:11:36.451331   69429 main.go:141] libmachine: (custom-flannel-889508) Waiting to get IP...
	I0725 19:11:36.452642   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:36.453216   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:36.453372   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:36.453303   70840 retry.go:31] will retry after 245.765408ms: waiting for machine to come up
	I0725 19:11:36.701137   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:36.701676   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:36.701702   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:36.701631   70840 retry.go:31] will retry after 314.356749ms: waiting for machine to come up
	I0725 19:11:37.017255   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:37.017801   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:37.017826   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:37.017777   70840 retry.go:31] will retry after 385.145666ms: waiting for machine to come up
	I0725 19:11:37.405115   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:37.405746   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:37.405772   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:37.405661   70840 retry.go:31] will retry after 569.487184ms: waiting for machine to come up
	I0725 19:11:37.977094   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:37.977682   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:37.977712   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:37.977636   70840 retry.go:31] will retry after 694.10999ms: waiting for machine to come up
	I0725 19:11:38.673430   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:38.673915   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:38.673947   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:38.673843   70840 retry.go:31] will retry after 778.595119ms: waiting for machine to come up
	I0725 19:11:39.454663   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:39.455304   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:39.455363   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:39.455256   70840 retry.go:31] will retry after 1.090972713s: waiting for machine to come up
	I0725 19:11:40.547930   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:40.548525   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:40.548555   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:40.548478   70840 retry.go:31] will retry after 1.168838705s: waiting for machine to come up
	I0725 19:11:39.881524   68507 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 19:11:39.881550   68507 cache_images.go:84] Images are preloaded, skipping loading
	I0725 19:11:39.881561   68507 kubeadm.go:934] updating node { 192.168.50.187 8443 v1.30.3 crio true true} ...
	I0725 19:11:39.881695   68507 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-889508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:calico-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0725 19:11:39.881774   68507 ssh_runner.go:195] Run: crio config
	I0725 19:11:39.929874   68507 cni.go:84] Creating CNI manager for "calico"
	I0725 19:11:39.929901   68507 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 19:11:39.929931   68507 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.187 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-889508 NodeName:calico-889508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 19:11:39.930108   68507 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-889508"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 19:11:39.930189   68507 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 19:11:39.940258   68507 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 19:11:39.940349   68507 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 19:11:39.949343   68507 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0725 19:11:39.964574   68507 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 19:11:39.979882   68507 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0725 19:11:39.994723   68507 ssh_runner.go:195] Run: grep 192.168.50.187	control-plane.minikube.internal$ /etc/hosts
	I0725 19:11:39.998257   68507 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:11:40.009611   68507 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:11:40.124833   68507 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:11:40.144814   68507 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508 for IP: 192.168.50.187
	I0725 19:11:40.144833   68507 certs.go:194] generating shared ca certs ...
	I0725 19:11:40.144847   68507 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:11:40.144976   68507 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 19:11:40.145010   68507 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 19:11:40.145019   68507 certs.go:256] generating profile certs ...
	I0725 19:11:40.145066   68507 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/client.key
	I0725 19:11:40.145083   68507 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/client.crt with IP's: []
	I0725 19:11:40.212424   68507 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/client.crt ...
	I0725 19:11:40.212454   68507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/client.crt: {Name:mk6a67f65c7564641a1436b97a2f9bb5edf5a273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:11:40.212625   68507 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/client.key ...
	I0725 19:11:40.212636   68507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/client.key: {Name:mkc3e04daaab4f49e2879d358f4faff327237156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:11:40.212707   68507 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.key.e06495d7
	I0725 19:11:40.212722   68507 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.crt.e06495d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.187]
	I0725 19:11:40.300141   68507 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.crt.e06495d7 ...
	I0725 19:11:40.300169   68507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.crt.e06495d7: {Name:mk386d4343efc2dc9d8d4bd3f0038772806a5289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:11:40.300317   68507 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.key.e06495d7 ...
	I0725 19:11:40.300354   68507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.key.e06495d7: {Name:mk3acc66373fc947896eb59266687eee67e00e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:11:40.300438   68507 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.crt.e06495d7 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.crt
	I0725 19:11:40.300522   68507 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.key.e06495d7 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.key
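The apiserver certificate above is issued for IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.187]; 10.96.0.1 is the first usable address of the 10.96.0.0/12 ServiceCIDR configured earlier, reserved for the in-cluster `kubernetes` Service. A small sketch of that derivation (a worked example, not minikube code) is below.

// service_ip_sketch.go - hypothetical sketch: the default `kubernetes` Service IP
// is the network address of the ServiceCIDR plus one, which is why 10.96.0.1
// shows up in the apiserver cert SANs above.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, cidr, err := net.ParseCIDR("10.96.0.0/12") // ServiceCIDR from this run
	if err != nil {
		panic(err)
	}
	ip := cidr.IP.To4()
	first := net.IPv4(ip[0], ip[1], ip[2], ip[3]+1) // network address + 1
	fmt.Println(first)                              // prints 10.96.0.1
}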
	I0725 19:11:40.300586   68507 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/proxy-client.key
	I0725 19:11:40.300601   68507 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/proxy-client.crt with IP's: []
	I0725 19:11:40.380073   68507 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/proxy-client.crt ...
	I0725 19:11:40.380100   68507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/proxy-client.crt: {Name:mk27155583da2d8c780e5ed852ab652b3856be46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:11:40.380271   68507 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/proxy-client.key ...
	I0725 19:11:40.380285   68507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/proxy-client.key: {Name:mkfdaa043d17eea1f36c908e874aa08b3b77b88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:11:40.380482   68507 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 19:11:40.380517   68507 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 19:11:40.380526   68507 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 19:11:40.380550   68507 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 19:11:40.380578   68507 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 19:11:40.380599   68507 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 19:11:40.380633   68507 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 19:11:40.381129   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 19:11:40.404411   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 19:11:40.425505   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 19:11:40.447276   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 19:11:40.469858   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0725 19:11:40.492072   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 19:11:40.513286   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 19:11:40.535167   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/calico-889508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 19:11:40.557300   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 19:11:40.579585   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 19:11:40.600857   68507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 19:11:40.623971   68507 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 19:11:40.640651   68507 ssh_runner.go:195] Run: openssl version
	I0725 19:11:40.646093   68507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 19:11:40.655544   68507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:11:40.659684   68507 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:11:40.659737   68507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:11:40.664957   68507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 19:11:40.674534   68507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 19:11:40.684015   68507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 19:11:40.687993   68507 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 19:11:40.688053   68507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 19:11:40.693288   68507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 19:11:40.702548   68507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 19:11:40.712232   68507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 19:11:40.716397   68507 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 19:11:40.716458   68507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 19:11:40.721662   68507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 19:11:40.731422   68507 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 19:11:40.735118   68507 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 19:11:40.735166   68507 kubeadm.go:392] StartCluster: {Name:calico-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:calico-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:11:40.735268   68507 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 19:11:40.735352   68507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 19:11:40.770264   68507 cri.go:89] found id: ""
	I0725 19:11:40.770351   68507 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 19:11:40.780670   68507 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 19:11:40.790550   68507 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 19:11:40.800026   68507 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 19:11:40.800045   68507 kubeadm.go:157] found existing configuration files:
	
	I0725 19:11:40.800096   68507 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 19:11:40.808337   68507 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 19:11:40.808394   68507 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 19:11:40.817246   68507 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 19:11:40.828301   68507 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 19:11:40.828389   68507 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 19:11:40.837737   68507 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 19:11:40.851515   68507 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 19:11:40.851594   68507 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 19:11:40.862112   68507 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 19:11:40.871323   68507 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 19:11:40.871372   68507 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 19:11:40.881189   68507 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 19:11:40.946148   68507 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 19:11:40.946262   68507 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 19:11:41.068926   68507 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 19:11:41.069077   68507 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 19:11:41.069247   68507 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 19:11:41.274968   68507 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 19:11:41.408600   68507 out.go:204]   - Generating certificates and keys ...
	I0725 19:11:41.408752   68507 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 19:11:41.408845   68507 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 19:11:41.855612   68507 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 19:11:41.953228   68507 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 19:11:42.090834   68507 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 19:11:42.311349   68507 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 19:11:42.641656   68507 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 19:11:42.641827   68507 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-889508 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
	I0725 19:11:42.923368   68507 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 19:11:42.923672   68507 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-889508 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
	I0725 19:11:43.114171   68507 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 19:11:43.170023   68507 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 19:11:43.345219   68507 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 19:11:43.345600   68507 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 19:11:43.481696   68507 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 19:11:43.612874   68507 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 19:11:43.854223   68507 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 19:11:44.104017   68507 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 19:11:44.245867   68507 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 19:11:44.246635   68507 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 19:11:44.250541   68507 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 19:11:44.252573   68507 out.go:204]   - Booting up control plane ...
	I0725 19:11:44.252720   68507 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 19:11:44.252821   68507 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 19:11:44.252949   68507 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 19:11:44.271175   68507 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 19:11:44.272091   68507 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 19:11:44.272174   68507 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 19:11:44.400400   68507 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 19:11:44.400525   68507 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 19:11:41.718804   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:41.719364   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:41.719388   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:41.719311   70840 retry.go:31] will retry after 1.669994519s: waiting for machine to come up
	I0725 19:11:43.391327   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:43.391767   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:43.391795   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:43.391726   70840 retry.go:31] will retry after 2.293692775s: waiting for machine to come up
	I0725 19:11:45.687401   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:45.687982   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:45.688010   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:45.687933   70840 retry.go:31] will retry after 2.410008715s: waiting for machine to come up
	I0725 19:11:44.902095   68507 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.056999ms
	I0725 19:11:44.902226   68507 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 19:11:49.903108   68507 kubeadm.go:310] [api-check] The API server is healthy after 5.002193152s
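The [api-check] wait above corresponds to polling the apiserver's health endpoint until it answers, with a 4m budget. A hedged standalone sketch of such a poll is shown below; the endpoint URL uses the node IP and port from this run, and skipping TLS verification is an assumption made only to keep the example self-contained.

// apicheck_sketch.go - hypothetical sketch: poll the apiserver /healthz endpoint
// until it returns 200 or the 4m deadline expires, mirroring the wait logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.187:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("API server is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "API server did not become healthy within 4m")
	os.Exit(1)
}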
	I0725 19:11:49.916523   68507 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 19:11:49.934685   68507 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 19:11:49.965889   68507 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 19:11:49.966148   68507 kubeadm.go:310] [mark-control-plane] Marking the node calico-889508 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 19:11:49.979245   68507 kubeadm.go:310] [bootstrap-token] Using token: mbybpd.p2hjf04bamf61ttt
	I0725 19:11:48.100397   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:48.100834   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:48.100856   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:48.100810   70840 retry.go:31] will retry after 3.263479481s: waiting for machine to come up
	I0725 19:11:49.980600   68507 out.go:204]   - Configuring RBAC rules ...
	I0725 19:11:49.980741   68507 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 19:11:49.995619   68507 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 19:11:50.005020   68507 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 19:11:50.008782   68507 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 19:11:50.015979   68507 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 19:11:50.019693   68507 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 19:11:50.307447   68507 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 19:11:50.742522   68507 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 19:11:51.308631   68507 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 19:11:51.309491   68507 kubeadm.go:310] 
	I0725 19:11:51.309591   68507 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 19:11:51.309607   68507 kubeadm.go:310] 
	I0725 19:11:51.309670   68507 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 19:11:51.309677   68507 kubeadm.go:310] 
	I0725 19:11:51.309714   68507 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 19:11:51.309782   68507 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 19:11:51.309849   68507 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 19:11:51.309858   68507 kubeadm.go:310] 
	I0725 19:11:51.309902   68507 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 19:11:51.309908   68507 kubeadm.go:310] 
	I0725 19:11:51.309947   68507 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 19:11:51.309952   68507 kubeadm.go:310] 
	I0725 19:11:51.310000   68507 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 19:11:51.310100   68507 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 19:11:51.310195   68507 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 19:11:51.310204   68507 kubeadm.go:310] 
	I0725 19:11:51.310267   68507 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 19:11:51.310351   68507 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 19:11:51.310357   68507 kubeadm.go:310] 
	I0725 19:11:51.310438   68507 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mbybpd.p2hjf04bamf61ttt \
	I0725 19:11:51.310570   68507 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 \
	I0725 19:11:51.310592   68507 kubeadm.go:310] 	--control-plane 
	I0725 19:11:51.310597   68507 kubeadm.go:310] 
	I0725 19:11:51.310708   68507 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 19:11:51.310734   68507 kubeadm.go:310] 
	I0725 19:11:51.310858   68507 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mbybpd.p2hjf04bamf61ttt \
	I0725 19:11:51.311017   68507 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 
	I0725 19:11:51.311342   68507 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
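The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's Subject Public Key Info, which joining nodes use to pin the CA they discover. A minimal sketch of that computation follows; the ca.crt path is the one used in this run, and the sketch is illustrative rather than minikube's or kubeadm's own code.

// cacert_hash_sketch.go - hypothetical sketch: compute the kubeadm-style
// "sha256:..." discovery hash as SHA-256 over the CA cert's DER-encoded SPKI.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from this run
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("ca.crt is not PEM encoded")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}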
	I0725 19:11:51.311359   68507 cni.go:84] Creating CNI manager for "calico"
	I0725 19:11:51.313141   68507 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0725 19:11:51.314805   68507 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0725 19:11:51.314824   68507 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253815 bytes)
	I0725 19:11:51.334225   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0725 19:11:52.516168   68507 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.181912035s)
	I0725 19:11:52.516208   68507 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 19:11:52.516313   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:52.516360   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-889508 minikube.k8s.io/updated_at=2024_07_25T19_11_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=calico-889508 minikube.k8s.io/primary=true
	I0725 19:11:52.549354   68507 ops.go:34] apiserver oom_adj: -16
	I0725 19:11:52.642972   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:53.143423   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:53.643554   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:54.143668   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:54.643686   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:51.365739   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:51.366186   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:51.366223   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:51.366158   70840 retry.go:31] will retry after 2.811631756s: waiting for machine to come up
	I0725 19:11:54.179737   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:54.180226   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find current IP address of domain custom-flannel-889508 in network mk-custom-flannel-889508
	I0725 19:11:54.180248   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | I0725 19:11:54.180191   70840 retry.go:31] will retry after 3.804402479s: waiting for machine to come up
	I0725 19:11:59.513119   70755 start.go:364] duration metric: took 33.960170742s to acquireMachinesLock for "enable-default-cni-889508"
	I0725 19:11:59.513186   70755 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 19:11:59.513305   70755 start.go:125] createHost starting for "" (driver="kvm2")
	I0725 19:11:55.143820   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:55.643020   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:56.143433   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:56.643954   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:57.143199   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:57.643995   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:58.143979   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:58.643042   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:59.143336   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:11:59.643781   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
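The repeated `kubectl get sa default` runs above (roughly every 500ms, per the timestamps) wait for the default ServiceAccount to appear, since it is created asynchronously once the control plane is up. A hypothetical sketch of that retry loop is below; the binary and kubeconfig paths are the ones from this run, and the 2m budget is an assumption.

// wait_default_sa_sketch.go - hypothetical sketch: retry `kubectl get sa default`
// every 500ms until the default ServiceAccount exists or a deadline passes.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl" // path from this run
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
	os.Exit(1)
}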
	I0725 19:11:59.515449   70755 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 19:11:59.515733   70755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:11:59.515785   70755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:11:59.535395   70755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42517
	I0725 19:11:59.535768   70755 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:11:59.536408   70755 main.go:141] libmachine: Using API Version  1
	I0725 19:11:59.536435   70755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:11:59.536785   70755 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:11:59.537055   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetMachineName
	I0725 19:11:59.537242   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .DriverName
	I0725 19:11:59.537418   70755 start.go:159] libmachine.API.Create for "enable-default-cni-889508" (driver="kvm2")
	I0725 19:11:59.537448   70755 client.go:168] LocalClient.Create starting
	I0725 19:11:59.537482   70755 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 19:11:59.537525   70755 main.go:141] libmachine: Decoding PEM data...
	I0725 19:11:59.537551   70755 main.go:141] libmachine: Parsing certificate...
	I0725 19:11:59.537635   70755 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 19:11:59.537666   70755 main.go:141] libmachine: Decoding PEM data...
	I0725 19:11:59.537683   70755 main.go:141] libmachine: Parsing certificate...
	I0725 19:11:59.537707   70755 main.go:141] libmachine: Running pre-create checks...
	I0725 19:11:59.537719   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .PreCreateCheck
	I0725 19:11:59.538119   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetConfigRaw
	I0725 19:11:59.538604   70755 main.go:141] libmachine: Creating machine...
	I0725 19:11:59.538623   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .Create
	I0725 19:11:59.538773   70755 main.go:141] libmachine: (enable-default-cni-889508) Creating KVM machine...
	I0725 19:11:59.540167   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found existing default KVM network
	I0725 19:11:59.541946   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:11:59.541777   71081 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9a:71:e3} reservation:<nil>}
	I0725 19:11:59.543031   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:11:59.542958   71081 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d9:11:0f} reservation:<nil>}
	I0725 19:11:59.543725   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:11:59.543654   71081 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:93:c9:4f} reservation:<nil>}
	I0725 19:11:59.544887   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:11:59.544778   71081 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a3950}
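The "skipping subnet ... taken" lines above are the kvm2 driver walking candidate private /24 ranges until it finds one that no existing libvirt bridge owns, then settling on 192.168.72.0/24. A minimal Go sketch of that selection, simplified to compare candidates only against addresses configured on host interfaces (the real network.go also tracks reservations), not the driver's actual implementation:

// freeSubnet returns the first candidate /24 that does not overlap any
// subnet already configured on a host interface. The candidate list and
// the overlap test are simplifications for illustration only.
package main

import (
	"fmt"
	"net"
)

func freeSubnet(candidates []string) (string, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return "", err
	}
	for _, c := range candidates {
		_, candNet, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		taken := false
		for _, ifc := range ifaces {
			addrs, _ := ifc.Addrs()
			for _, a := range addrs {
				if ipNet, ok := a.(*net.IPNet); ok && candNet.Contains(ipNet.IP) {
					taken = true // e.g. on the CI host virbr3 already owns 192.168.39.1
				}
			}
		}
		if !taken {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	subnet, err := freeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"})
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet)
}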
	I0725 19:11:59.544905   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | created network xml: 
	I0725 19:11:59.544915   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | <network>
	I0725 19:11:59.544924   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG |   <name>mk-enable-default-cni-889508</name>
	I0725 19:11:59.544935   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG |   <dns enable='no'/>
	I0725 19:11:59.544945   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG |   
	I0725 19:11:59.544957   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0725 19:11:59.544980   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG |     <dhcp>
	I0725 19:11:59.545011   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0725 19:11:59.545050   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG |     </dhcp>
	I0725 19:11:59.545080   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG |   </ip>
	I0725 19:11:59.545106   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG |   
	I0725 19:11:59.545120   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | </network>
	I0725 19:11:59.545127   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | 
	I0725 19:11:59.550514   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | trying to create private KVM network mk-enable-default-cni-889508 192.168.72.0/24...
	I0725 19:11:59.621738   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | private KVM network mk-enable-default-cni-889508 192.168.72.0/24 created
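The generated XML is then handed to libvirt, and the "private KVM network ... created" line confirms it. Outside the driver the same step can be reproduced with virsh; a short sketch assuming virsh is on PATH, qemu:///system is reachable, and the XML above has been saved to a hypothetical file name:

// Define, start and autostart a private libvirt network from an XML file,
// the CLI equivalent of what the kvm2 driver does through the libvirt API.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("virsh %v: %v\n%s", args, err, out)
	}
	log.Printf("virsh %v ok", args)
}

func main() {
	run("net-define", "mk-enable-default-cni-889508.xml") // hypothetical file holding the XML above
	run("net-start", "mk-enable-default-cni-889508")
	run("net-autostart", "mk-enable-default-cni-889508")
}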
	I0725 19:11:59.621768   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:11:59.621693   71081 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 19:11:59.621781   70755 main.go:141] libmachine: (enable-default-cni-889508) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508 ...
	I0725 19:11:59.621798   70755 main.go:141] libmachine: (enable-default-cni-889508) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 19:11:59.621880   70755 main.go:141] libmachine: (enable-default-cni-889508) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 19:11:59.875685   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:11:59.875564   71081 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/id_rsa...
	I0725 19:11:59.969717   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:11:59.969576   71081 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/enable-default-cni-889508.rawdisk...
	I0725 19:11:59.969747   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Writing magic tar header
	I0725 19:11:59.969760   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Writing SSH key tar header
	I0725 19:11:59.969772   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:11:59.969744   71081 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508 ...
	I0725 19:11:59.969952   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508
	I0725 19:11:59.970023   70755 main.go:141] libmachine: (enable-default-cni-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508 (perms=drwx------)
	I0725 19:11:59.970045   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 19:11:59.970066   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 19:11:59.970081   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 19:11:59.970104   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 19:11:59.970117   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Checking permissions on dir: /home/jenkins
	I0725 19:11:59.970128   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Checking permissions on dir: /home
	I0725 19:11:59.970142   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Skipping /home - not owner
	I0725 19:11:59.970166   70755 main.go:141] libmachine: (enable-default-cni-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 19:11:59.970181   70755 main.go:141] libmachine: (enable-default-cni-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 19:11:59.970199   70755 main.go:141] libmachine: (enable-default-cni-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 19:11:59.970216   70755 main.go:141] libmachine: (enable-default-cni-889508) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 19:11:59.970237   70755 main.go:141] libmachine: (enable-default-cni-889508) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 19:11:59.970248   70755 main.go:141] libmachine: (enable-default-cni-889508) Creating domain...
	I0725 19:11:59.971225   70755 main.go:141] libmachine: (enable-default-cni-889508) define libvirt domain using xml: 
	I0725 19:11:59.971242   70755 main.go:141] libmachine: (enable-default-cni-889508) <domain type='kvm'>
	I0725 19:11:59.971254   70755 main.go:141] libmachine: (enable-default-cni-889508)   <name>enable-default-cni-889508</name>
	I0725 19:11:59.971262   70755 main.go:141] libmachine: (enable-default-cni-889508)   <memory unit='MiB'>3072</memory>
	I0725 19:11:59.971272   70755 main.go:141] libmachine: (enable-default-cni-889508)   <vcpu>2</vcpu>
	I0725 19:11:59.971278   70755 main.go:141] libmachine: (enable-default-cni-889508)   <features>
	I0725 19:11:59.971294   70755 main.go:141] libmachine: (enable-default-cni-889508)     <acpi/>
	I0725 19:11:59.971313   70755 main.go:141] libmachine: (enable-default-cni-889508)     <apic/>
	I0725 19:11:59.971339   70755 main.go:141] libmachine: (enable-default-cni-889508)     <pae/>
	I0725 19:11:59.971354   70755 main.go:141] libmachine: (enable-default-cni-889508)     
	I0725 19:11:59.971363   70755 main.go:141] libmachine: (enable-default-cni-889508)   </features>
	I0725 19:11:59.971389   70755 main.go:141] libmachine: (enable-default-cni-889508)   <cpu mode='host-passthrough'>
	I0725 19:11:59.971401   70755 main.go:141] libmachine: (enable-default-cni-889508)   
	I0725 19:11:59.971408   70755 main.go:141] libmachine: (enable-default-cni-889508)   </cpu>
	I0725 19:11:59.971415   70755 main.go:141] libmachine: (enable-default-cni-889508)   <os>
	I0725 19:11:59.971423   70755 main.go:141] libmachine: (enable-default-cni-889508)     <type>hvm</type>
	I0725 19:11:59.971432   70755 main.go:141] libmachine: (enable-default-cni-889508)     <boot dev='cdrom'/>
	I0725 19:11:59.971438   70755 main.go:141] libmachine: (enable-default-cni-889508)     <boot dev='hd'/>
	I0725 19:11:59.971447   70755 main.go:141] libmachine: (enable-default-cni-889508)     <bootmenu enable='no'/>
	I0725 19:11:59.971460   70755 main.go:141] libmachine: (enable-default-cni-889508)   </os>
	I0725 19:11:59.971469   70755 main.go:141] libmachine: (enable-default-cni-889508)   <devices>
	I0725 19:11:59.971477   70755 main.go:141] libmachine: (enable-default-cni-889508)     <disk type='file' device='cdrom'>
	I0725 19:11:59.971495   70755 main.go:141] libmachine: (enable-default-cni-889508)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/boot2docker.iso'/>
	I0725 19:11:59.971505   70755 main.go:141] libmachine: (enable-default-cni-889508)       <target dev='hdc' bus='scsi'/>
	I0725 19:11:59.971517   70755 main.go:141] libmachine: (enable-default-cni-889508)       <readonly/>
	I0725 19:11:59.971529   70755 main.go:141] libmachine: (enable-default-cni-889508)     </disk>
	I0725 19:11:59.971540   70755 main.go:141] libmachine: (enable-default-cni-889508)     <disk type='file' device='disk'>
	I0725 19:11:59.971554   70755 main.go:141] libmachine: (enable-default-cni-889508)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 19:11:59.971571   70755 main.go:141] libmachine: (enable-default-cni-889508)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/enable-default-cni-889508.rawdisk'/>
	I0725 19:11:59.971582   70755 main.go:141] libmachine: (enable-default-cni-889508)       <target dev='hda' bus='virtio'/>
	I0725 19:11:59.971592   70755 main.go:141] libmachine: (enable-default-cni-889508)     </disk>
	I0725 19:11:59.971615   70755 main.go:141] libmachine: (enable-default-cni-889508)     <interface type='network'>
	I0725 19:11:59.971639   70755 main.go:141] libmachine: (enable-default-cni-889508)       <source network='mk-enable-default-cni-889508'/>
	I0725 19:11:59.971657   70755 main.go:141] libmachine: (enable-default-cni-889508)       <model type='virtio'/>
	I0725 19:11:59.971668   70755 main.go:141] libmachine: (enable-default-cni-889508)     </interface>
	I0725 19:11:59.971677   70755 main.go:141] libmachine: (enable-default-cni-889508)     <interface type='network'>
	I0725 19:11:59.971689   70755 main.go:141] libmachine: (enable-default-cni-889508)       <source network='default'/>
	I0725 19:11:59.971701   70755 main.go:141] libmachine: (enable-default-cni-889508)       <model type='virtio'/>
	I0725 19:11:59.971715   70755 main.go:141] libmachine: (enable-default-cni-889508)     </interface>
	I0725 19:11:59.971727   70755 main.go:141] libmachine: (enable-default-cni-889508)     <serial type='pty'>
	I0725 19:11:59.971740   70755 main.go:141] libmachine: (enable-default-cni-889508)       <target port='0'/>
	I0725 19:11:59.971750   70755 main.go:141] libmachine: (enable-default-cni-889508)     </serial>
	I0725 19:11:59.971759   70755 main.go:141] libmachine: (enable-default-cni-889508)     <console type='pty'>
	I0725 19:11:59.971770   70755 main.go:141] libmachine: (enable-default-cni-889508)       <target type='serial' port='0'/>
	I0725 19:11:59.971779   70755 main.go:141] libmachine: (enable-default-cni-889508)     </console>
	I0725 19:11:59.971803   70755 main.go:141] libmachine: (enable-default-cni-889508)     <rng model='virtio'>
	I0725 19:11:59.971820   70755 main.go:141] libmachine: (enable-default-cni-889508)       <backend model='random'>/dev/random</backend>
	I0725 19:11:59.971832   70755 main.go:141] libmachine: (enable-default-cni-889508)     </rng>
	I0725 19:11:59.971842   70755 main.go:141] libmachine: (enable-default-cni-889508)     
	I0725 19:11:59.971850   70755 main.go:141] libmachine: (enable-default-cni-889508)     
	I0725 19:11:59.971861   70755 main.go:141] libmachine: (enable-default-cni-889508)   </devices>
	I0725 19:11:59.971881   70755 main.go:141] libmachine: (enable-default-cni-889508) </domain>
	I0725 19:11:59.971898   70755 main.go:141] libmachine: (enable-default-cni-889508) 
	I0725 19:11:59.979303   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:a1:d1:6d in network default
	I0725 19:11:59.980067   70755 main.go:141] libmachine: (enable-default-cni-889508) Ensuring networks are active...
	I0725 19:11:59.980115   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:11:59.980877   70755 main.go:141] libmachine: (enable-default-cni-889508) Ensuring network default is active
	I0725 19:11:59.981263   70755 main.go:141] libmachine: (enable-default-cni-889508) Ensuring network mk-enable-default-cni-889508 is active
	I0725 19:11:59.981847   70755 main.go:141] libmachine: (enable-default-cni-889508) Getting domain xml...
	I0725 19:11:59.982542   70755 main.go:141] libmachine: (enable-default-cni-889508) Creating domain...
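Defining and booting the domain goes through the libvirt API rather than the CLI. A rough sketch of those two calls using the Go bindings at libvirt.org/go/libvirt; the package path, file name and bare-bones error handling are assumptions, and the real driver wires up far more of the XML shown above:

// A minimal sketch of defining and booting the domain through the Go
// libvirt bindings; not the driver's actual code path.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("enable-default-cni-889508.xml") // hypothetical dump of the domain XML above
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistent definition, like "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the VM, the log's "Creating domain..."
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}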
	I0725 19:11:57.986927   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:57.987449   69429 main.go:141] libmachine: (custom-flannel-889508) Found IP for machine: 192.168.39.248
	I0725 19:11:57.987474   69429 main.go:141] libmachine: (custom-flannel-889508) Reserving static IP address...
	I0725 19:11:57.987491   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has current primary IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:57.987899   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | unable to find host DHCP lease matching {name: "custom-flannel-889508", mac: "52:54:00:21:5a:79", ip: "192.168.39.248"} in network mk-custom-flannel-889508
	I0725 19:11:58.061404   69429 main.go:141] libmachine: (custom-flannel-889508) Reserved static IP address: 192.168.39.248
	I0725 19:11:58.061436   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Getting to WaitForSSH function...
	I0725 19:11:58.061445   69429 main.go:141] libmachine: (custom-flannel-889508) Waiting for SSH to be available...
	I0725 19:11:58.064512   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.064967   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:58.064996   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.065119   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Using SSH client type: external
	I0725 19:11:58.065157   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/id_rsa (-rw-------)
	I0725 19:11:58.065206   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 19:11:58.065226   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | About to run SSH command:
	I0725 19:11:58.065246   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | exit 0
	I0725 19:11:58.188301   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | SSH cmd err, output: <nil>: 
	I0725 19:11:58.188633   69429 main.go:141] libmachine: (custom-flannel-889508) KVM machine creation complete!
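"Waiting for SSH to be available" is a retry loop around the external ssh invocation shown a few lines up (run "exit 0" with host-key checking disabled until it succeeds). A sketch of that wait, reusing the host, key path and a subset of the SSH options from the log; the three-minute budget and two-second backoff are assumptions:

// Poll "exit 0" over SSH until the freshly booted VM answers, the shape of
// the driver's WaitForSSH step.
package main

import (
	"log"
	"os/exec"
	"time"
)

func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	host := "192.168.39.248"
	key := "/home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/id_rsa"
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		if sshReady(host, key) {
			log.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for SSH")
}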
	I0725 19:11:58.188903   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetConfigRaw
	I0725 19:11:58.189590   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .DriverName
	I0725 19:11:58.189826   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .DriverName
	I0725 19:11:58.190001   69429 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 19:11:58.190018   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetState
	I0725 19:11:58.191487   69429 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 19:11:58.191504   69429 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 19:11:58.191512   69429 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 19:11:58.191521   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:11:58.194347   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.194758   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:58.194789   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.194935   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:11:58.195103   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:58.195265   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:58.195399   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:11:58.195539   69429 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:58.195718   69429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0725 19:11:58.195728   69429 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 19:11:58.299625   69429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 19:11:58.299675   69429 main.go:141] libmachine: Detecting the provisioner...
	I0725 19:11:58.299688   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:11:58.302476   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.302869   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:58.302890   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.303037   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:11:58.303245   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:58.303425   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:58.303542   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:11:58.303766   69429 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:58.303976   69429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0725 19:11:58.303989   69429 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 19:11:58.409219   69429 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 19:11:58.409287   69429 main.go:141] libmachine: found compatible host: buildroot
	I0725 19:11:58.409302   69429 main.go:141] libmachine: Provisioning with buildroot...
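Provisioner detection is just cat /etc/os-release plus a lookup of its key=value fields, which is why the Buildroot block above immediately yields "found compatible host: buildroot". A small parse of that output, assuming the standard os-release format:

// Parse os-release style key=value output and report the distro ID,
// the field behind "found compatible host: buildroot".
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	fmt.Println("found compatible host:", info["ID"]) // buildroot
}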
	I0725 19:11:58.409310   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetMachineName
	I0725 19:11:58.409587   69429 buildroot.go:166] provisioning hostname "custom-flannel-889508"
	I0725 19:11:58.409615   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetMachineName
	I0725 19:11:58.409785   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:11:58.412481   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.412890   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:58.412938   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.413113   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:11:58.413323   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:58.413519   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:58.413744   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:11:58.413940   69429 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:58.414128   69429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0725 19:11:58.414148   69429 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-889508 && echo "custom-flannel-889508" | sudo tee /etc/hostname
	I0725 19:11:58.530686   69429 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-889508
	
	I0725 19:11:58.530712   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:11:58.533476   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.533862   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:58.533890   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.534104   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:11:58.534305   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:58.534507   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:58.534733   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:11:58.534929   69429 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:58.535090   69429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0725 19:11:58.535106   69429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-889508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-889508/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-889508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 19:11:58.644303   69429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 19:11:58.644393   69429 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 19:11:58.644421   69429 buildroot.go:174] setting up certificates
	I0725 19:11:58.644442   69429 provision.go:84] configureAuth start
	I0725 19:11:58.644459   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetMachineName
	I0725 19:11:58.644786   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetIP
	I0725 19:11:58.647836   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.648366   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:58.648399   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.648559   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:11:58.651192   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.651613   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:58.651631   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.651809   69429 provision.go:143] copyHostCerts
	I0725 19:11:58.651865   69429 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 19:11:58.651874   69429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 19:11:58.651926   69429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 19:11:58.652016   69429 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 19:11:58.652024   69429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 19:11:58.652044   69429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 19:11:58.652101   69429 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 19:11:58.652108   69429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 19:11:58.652125   69429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 19:11:58.652179   69429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-889508 san=[127.0.0.1 192.168.39.248 custom-flannel-889508 localhost minikube]
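provision.go then mints a server certificate whose SANs are the list printed above (127.0.0.1, the VM IP, the hostname, localhost, minikube). A compact sketch of generating such a certificate with crypto/x509; unlike the real flow, which signs with the profile CA in certs/ca.pem and ca-key.pem, this sketch is self-signed to stay short:

// Generate a server certificate carrying the SAN list shown in the log.
// Self-signed here purely to keep the example small; the provisioner signs
// with its CA instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-889508"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"custom-flannel-889508", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.248")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote server.pem / server-key.pem with SANs from the log")
}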
	I0725 19:11:58.859078   69429 provision.go:177] copyRemoteCerts
	I0725 19:11:58.859153   69429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 19:11:58.859176   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:11:58.862042   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.862325   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:58.862361   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:58.862550   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:11:58.862756   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:58.862887   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:11:58.863023   69429 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/id_rsa Username:docker}
	I0725 19:11:58.946104   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 19:11:58.968943   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0725 19:11:58.991238   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 19:11:59.013987   69429 provision.go:87] duration metric: took 369.527983ms to configureAuth
	I0725 19:11:59.014012   69429 buildroot.go:189] setting minikube options for container-runtime
	I0725 19:11:59.014195   69429 config.go:182] Loaded profile config "custom-flannel-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:11:59.014263   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:11:59.016890   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.017211   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:59.017245   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.017360   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:11:59.017585   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:59.017766   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:59.017961   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:11:59.018132   69429 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:59.018289   69429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0725 19:11:59.018311   69429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 19:11:59.276308   69429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
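The %!s(MISSING) in the command above is a printf verb that lost its argument when the command template was logged; judging by the echoed output, the executed command writes the CRIO_MINIKUBE_OPTIONS line into /etc/sysconfig/crio.minikube and restarts CRI-O. A sketch that rebuilds that command string (the exact formatting in minikube's source is an assumption):

// Rebuild the remote shell command that drops CRIO_MINIKUBE_OPTIONS into a
// sysconfig file and restarts CRI-O; the payload is the options string the
// log echoes back on the following line.
package main

import "fmt"

func main() {
	opts := "--insecure-registry 10.96.0.0/12 "
	cmd := fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='%s'
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
	fmt.Println(cmd) // this string is what ssh_runner would execute on the guest
}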
	I0725 19:11:59.276369   69429 main.go:141] libmachine: Checking connection to Docker...
	I0725 19:11:59.276380   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetURL
	I0725 19:11:59.277939   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Using libvirt version 6000000
	I0725 19:11:59.280170   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.280566   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:59.280598   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.280740   69429 main.go:141] libmachine: Docker is up and running!
	I0725 19:11:59.280755   69429 main.go:141] libmachine: Reticulating splines...
	I0725 19:11:59.280765   69429 client.go:171] duration metric: took 24.659064445s to LocalClient.Create
	I0725 19:11:59.280789   69429 start.go:167] duration metric: took 24.659138528s to libmachine.API.Create "custom-flannel-889508"
	I0725 19:11:59.280800   69429 start.go:293] postStartSetup for "custom-flannel-889508" (driver="kvm2")
	I0725 19:11:59.280813   69429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 19:11:59.280834   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .DriverName
	I0725 19:11:59.281062   69429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 19:11:59.281083   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:11:59.283108   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.283476   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:59.283525   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.283644   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:11:59.283802   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:59.283964   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:11:59.284113   69429 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/id_rsa Username:docker}
	I0725 19:11:59.366391   69429 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 19:11:59.370337   69429 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 19:11:59.370361   69429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 19:11:59.370431   69429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 19:11:59.370535   69429 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 19:11:59.370675   69429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 19:11:59.379686   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 19:11:59.401727   69429 start.go:296] duration metric: took 120.91461ms for postStartSetup
	I0725 19:11:59.401778   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetConfigRaw
	I0725 19:11:59.402307   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetIP
	I0725 19:11:59.404969   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.405398   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:59.405450   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.405620   69429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/config.json ...
	I0725 19:11:59.405850   69429 start.go:128] duration metric: took 24.804751842s to createHost
	I0725 19:11:59.405875   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:11:59.408198   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.408632   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:59.408652   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.408784   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:11:59.408943   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:59.409088   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:59.409216   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:11:59.409405   69429 main.go:141] libmachine: Using SSH client type: native
	I0725 19:11:59.409561   69429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0725 19:11:59.409574   69429 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 19:11:59.512928   69429 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721934719.485942004
	
	I0725 19:11:59.512956   69429 fix.go:216] guest clock: 1721934719.485942004
	I0725 19:11:59.512964   69429 fix.go:229] Guest: 2024-07-25 19:11:59.485942004 +0000 UTC Remote: 2024-07-25 19:11:59.405863365 +0000 UTC m=+43.657832956 (delta=80.078639ms)
	I0725 19:11:59.513002   69429 fix.go:200] guest clock delta is within tolerance: 80.078639ms
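fix.go compares the guest's date +%s.%N output against the host clock and only resynchronizes when the delta leaves tolerance; here the delta is about 80ms, well inside. A toy version of that comparison, with the tolerance value assumed:

// Compare a guest epoch timestamp against the host clock and report
// whether the delta is within an (assumed) tolerance.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1721934719.485942004" // what the SSH date command returned
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	tolerance := 2 * time.Second
	if math.Abs(float64(delta)) > float64(tolerance) {
		fmt.Printf("guest clock delta %v exceeds tolerance %v, would resync\n", delta, tolerance)
		return
	}
	fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
}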
	I0725 19:11:59.513008   69429 start.go:83] releasing machines lock for "custom-flannel-889508", held for 24.912092821s
	I0725 19:11:59.513038   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .DriverName
	I0725 19:11:59.513323   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetIP
	I0725 19:11:59.516580   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.516949   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:59.516976   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.517159   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .DriverName
	I0725 19:11:59.517751   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .DriverName
	I0725 19:11:59.517948   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .DriverName
	I0725 19:11:59.518037   69429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 19:11:59.518078   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:11:59.518206   69429 ssh_runner.go:195] Run: cat /version.json
	I0725 19:11:59.518232   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:11:59.520786   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.521028   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.521229   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:59.521257   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.521424   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:11:59.521538   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:11:59.521566   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:11:59.521608   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:59.521693   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:11:59.521776   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:11:59.521845   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:11:59.521911   69429 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/id_rsa Username:docker}
	I0725 19:11:59.521985   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:11:59.522080   69429 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/id_rsa Username:docker}
	I0725 19:11:59.639307   69429 ssh_runner.go:195] Run: systemctl --version
	I0725 19:11:59.645569   69429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 19:11:59.819837   69429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 19:11:59.826552   69429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 19:11:59.826609   69429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 19:11:59.841477   69429 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
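Disabling the bridge/podman CNI configs is a find-and-rename to *.mk_disabled (the %!p(MISSING) is again a logged printf verb standing for the matched path). An equivalent sketch using filepath.Glob, assuming root access to /etc/cni/net.d:

// Rename any bridge/podman CNI configs so CRI-O will not load them,
// mirroring the find/mv pipeline in the log above.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		log.Fatal(err)
	}
	for _, f := range matches {
		base := filepath.Base(f)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(f, f+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			log.Printf("disabled %s", f)
		}
	}
}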
	I0725 19:11:59.841496   69429 start.go:495] detecting cgroup driver to use...
	I0725 19:11:59.841562   69429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 19:11:59.862971   69429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 19:11:59.882231   69429 docker.go:217] disabling cri-docker service (if available) ...
	I0725 19:11:59.882290   69429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 19:11:59.897345   69429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 19:11:59.912358   69429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 19:12:00.031598   69429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 19:12:00.194340   69429 docker.go:233] disabling docker service ...
	I0725 19:12:00.194423   69429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 19:12:00.208921   69429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 19:12:00.224056   69429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 19:12:00.374461   69429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 19:12:00.499681   69429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 19:12:00.515885   69429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 19:12:00.535312   69429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 19:12:00.535378   69429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:00.546654   69429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 19:12:00.546726   69429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:00.557507   69429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:00.568507   69429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:00.579859   69429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 19:12:00.590514   69429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:00.600469   69429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:00.617142   69429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
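The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod". A sketch of the same rewrites with regexp instead of sed; the default_sysctls injection is omitted for brevity and the write-back path is assumed:

// Apply the 02-crio.conf rewrites shown in the log with Go regexps
// instead of sed.
package main

import (
	"log"
	"os"
	"regexp"
	"strings"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = strings.Replace(conf,
		`cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)

	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		log.Fatal(err)
	}
	log.Println("rewrote", path)
}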
	I0725 19:12:00.628450   69429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 19:12:00.639635   69429 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 19:12:00.639701   69429 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 19:12:00.659011   69429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 19:12:00.678176   69429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:12:00.808526   69429 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 19:12:00.947420   69429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 19:12:00.947499   69429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 19:12:00.952518   69429 start.go:563] Will wait 60s for crictl version
	I0725 19:12:00.952578   69429 ssh_runner.go:195] Run: which crictl
	I0725 19:12:00.956486   69429 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 19:12:00.996349   69429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 19:12:00.996428   69429 ssh_runner.go:195] Run: crio --version
	I0725 19:12:01.023129   69429 ssh_runner.go:195] Run: crio --version
	I0725 19:12:01.060017   69429 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 19:12:00.143439   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:00.643742   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:01.143881   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:01.643452   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:02.143555   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:02.643013   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:03.143106   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:03.644041   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:04.143667   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:04.643118   68507 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:04.834260   68507 kubeadm.go:1113] duration metric: took 12.31800732s to wait for elevateKubeSystemPrivileges
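The long run of kubectl get sa default lines is a half-second poll that ends once kubeadm has created the default service account, which is what the 12.3s elevateKubeSystemPrivileges metric measures. A sketch of that poll, with the overall timeout assumed:

// Poll for the "default" service account every 500ms, matching the
// cadence of the repeated "kubectl get sa default" lines above.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
	deadline := time.Now().Add(5 * time.Minute) // the real wait budget may differ
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			log.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for default service account")
}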
	I0725 19:12:04.834307   68507 kubeadm.go:394] duration metric: took 24.099142452s to StartCluster
	I0725 19:12:04.834329   68507 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:04.834426   68507 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 19:12:04.836387   68507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:04.836664   68507 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 19:12:04.836676   68507 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 19:12:04.836952   68507 config.go:182] Loaded profile config "calico-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:12:04.837005   68507 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 19:12:04.837068   68507 addons.go:69] Setting storage-provisioner=true in profile "calico-889508"
	I0725 19:12:04.837096   68507 addons.go:234] Setting addon storage-provisioner=true in "calico-889508"
	I0725 19:12:04.837127   68507 host.go:66] Checking if "calico-889508" exists ...
	I0725 19:12:04.837557   68507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:04.837586   68507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:04.837649   68507 addons.go:69] Setting default-storageclass=true in profile "calico-889508"
	I0725 19:12:04.837689   68507 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-889508"
	I0725 19:12:04.838099   68507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:04.838128   68507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:04.838338   68507 out.go:177] * Verifying Kubernetes components...
	I0725 19:12:04.839975   68507 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:12:04.860457   68507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36919
	I0725 19:12:04.860640   68507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44307
	I0725 19:12:04.860920   68507 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:04.861044   68507 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:04.861594   68507 main.go:141] libmachine: Using API Version  1
	I0725 19:12:04.861616   68507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:04.861757   68507 main.go:141] libmachine: Using API Version  1
	I0725 19:12:04.861770   68507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:04.862148   68507 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:04.862225   68507 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:04.862519   68507 main.go:141] libmachine: (calico-889508) Calling .GetState
	I0725 19:12:04.862829   68507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:04.862866   68507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:04.866458   68507 addons.go:234] Setting addon default-storageclass=true in "calico-889508"
	I0725 19:12:04.866500   68507 host.go:66] Checking if "calico-889508" exists ...
	I0725 19:12:04.866878   68507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:04.866895   68507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:04.884048   68507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39441
	I0725 19:12:04.884533   68507 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:04.885079   68507 main.go:141] libmachine: Using API Version  1
	I0725 19:12:04.885103   68507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:04.885684   68507 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:04.885907   68507 main.go:141] libmachine: (calico-889508) Calling .GetState
	I0725 19:12:04.887303   68507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0725 19:12:04.887834   68507 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:04.888210   68507 main.go:141] libmachine: (calico-889508) Calling .DriverName
	I0725 19:12:04.888409   68507 main.go:141] libmachine: Using API Version  1
	I0725 19:12:04.888426   68507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:04.888811   68507 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:04.889387   68507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:04.889434   68507 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:04.889933   68507 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 19:12:01.424227   70755 main.go:141] libmachine: (enable-default-cni-889508) Waiting to get IP...
	I0725 19:12:01.426317   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:01.426845   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:01.426885   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:01.426835   71081 retry.go:31] will retry after 187.667174ms: waiting for machine to come up
	I0725 19:12:01.616615   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:01.617147   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:01.617179   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:01.617111   71081 retry.go:31] will retry after 245.424236ms: waiting for machine to come up
	I0725 19:12:01.864654   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:01.865284   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:01.865314   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:01.865245   71081 retry.go:31] will retry after 338.090171ms: waiting for machine to come up
	I0725 19:12:02.204708   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:02.205301   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:02.205336   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:02.205254   71081 retry.go:31] will retry after 469.077149ms: waiting for machine to come up
	I0725 19:12:02.676013   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:02.676618   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:02.676641   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:02.676533   71081 retry.go:31] will retry after 489.519597ms: waiting for machine to come up
	I0725 19:12:03.168024   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:03.168504   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:03.168579   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:03.168465   71081 retry.go:31] will retry after 745.52206ms: waiting for machine to come up
	I0725 19:12:03.915588   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:03.916114   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:03.916139   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:03.916063   71081 retry.go:31] will retry after 980.042698ms: waiting for machine to come up
	I0725 19:12:04.901156   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:04.901684   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:04.901705   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:04.901592   71081 retry.go:31] will retry after 1.448184145s: waiting for machine to come up
	I0725 19:12:01.061347   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetIP
	I0725 19:12:01.064205   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:12:01.064613   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:12:01.064644   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:12:01.064885   69429 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 19:12:01.068745   69429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:12:01.085214   69429 kubeadm.go:883] updating cluster {Name:custom-flannel-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 19:12:01.085340   69429 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 19:12:01.085395   69429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:12:01.119068   69429 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 19:12:01.119125   69429 ssh_runner.go:195] Run: which lz4
	I0725 19:12:01.123097   69429 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 19:12:01.127361   69429 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 19:12:01.127396   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 19:12:02.458101   69429 crio.go:462] duration metric: took 1.335036598s to copy over tarball
	I0725 19:12:02.458181   69429 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 19:12:05.097993   69429 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.639759329s)
	I0725 19:12:05.098023   69429 crio.go:469] duration metric: took 2.639895319s to extract the tarball
	I0725 19:12:05.098032   69429 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 19:12:05.144314   69429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:12:05.193421   69429 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 19:12:05.193447   69429 cache_images.go:84] Images are preloaded, skipping loading
	I0725 19:12:05.193458   69429 kubeadm.go:934] updating node { 192.168.39.248 8443 v1.30.3 crio true true} ...
	I0725 19:12:05.193598   69429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-889508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0725 19:12:05.193682   69429 ssh_runner.go:195] Run: crio config
	I0725 19:12:05.243196   69429 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0725 19:12:05.243242   69429 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 19:12:05.243275   69429 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.248 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-889508 NodeName:custom-flannel-889508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 19:12:05.243455   69429 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-889508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 19:12:05.243521   69429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 19:12:05.256057   69429 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 19:12:05.256134   69429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 19:12:05.267988   69429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0725 19:12:05.287647   69429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 19:12:05.306881   69429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0725 19:12:05.326341   69429 ssh_runner.go:195] Run: grep 192.168.39.248	control-plane.minikube.internal$ /etc/hosts
	I0725 19:12:05.331300   69429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:12:05.347597   69429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:12:05.492876   69429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:12:05.512463   69429 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508 for IP: 192.168.39.248
	I0725 19:12:05.512488   69429 certs.go:194] generating shared ca certs ...
	I0725 19:12:05.512511   69429 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:05.512722   69429 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 19:12:05.512783   69429 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 19:12:05.512797   69429 certs.go:256] generating profile certs ...
	I0725 19:12:05.512865   69429 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/client.key
	I0725 19:12:05.512883   69429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/client.crt with IP's: []
	I0725 19:12:05.749841   69429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/client.crt ...
	I0725 19:12:05.749868   69429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/client.crt: {Name:mk2ce6d262a980aebd93150c3eb5ea894b5efd2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:05.750021   69429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/client.key ...
	I0725 19:12:05.750032   69429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/client.key: {Name:mkba1cf47ee22ca1fe2cb45c50a087a9495b6d10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:05.750106   69429 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.key.fa663edc
	I0725 19:12:05.750122   69429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.crt.fa663edc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.248]
	I0725 19:12:04.891170   68507 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:12:04.891185   68507 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 19:12:04.891203   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:12:04.894889   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:12:04.895308   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:12:04.895325   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:12:04.895609   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:12:04.895786   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:12:04.895930   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:12:04.896061   68507 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/calico-889508/id_rsa Username:docker}
	I0725 19:12:04.910349   68507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0725 19:12:04.910882   68507 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:04.911386   68507 main.go:141] libmachine: Using API Version  1
	I0725 19:12:04.911408   68507 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:04.911878   68507 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:04.912129   68507 main.go:141] libmachine: (calico-889508) Calling .GetState
	I0725 19:12:04.913987   68507 main.go:141] libmachine: (calico-889508) Calling .DriverName
	I0725 19:12:04.914203   68507 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 19:12:04.914221   68507 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 19:12:04.914246   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHHostname
	I0725 19:12:04.917325   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:12:04.917815   68507 main.go:141] libmachine: (calico-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:9b:1c", ip: ""} in network mk-calico-889508: {Iface:virbr2 ExpiryTime:2024-07-25 20:11:25 +0000 UTC Type:0 Mac:52:54:00:fe:9b:1c Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:calico-889508 Clientid:01:52:54:00:fe:9b:1c}
	I0725 19:12:04.917844   68507 main.go:141] libmachine: (calico-889508) DBG | domain calico-889508 has defined IP address 192.168.50.187 and MAC address 52:54:00:fe:9b:1c in network mk-calico-889508
	I0725 19:12:04.918143   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHPort
	I0725 19:12:04.918316   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHKeyPath
	I0725 19:12:04.918509   68507 main.go:141] libmachine: (calico-889508) Calling .GetSSHUsername
	I0725 19:12:04.918652   68507 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/calico-889508/id_rsa Username:docker}
	I0725 19:12:05.167895   68507 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:12:05.168075   68507 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 19:12:05.307597   68507 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:12:05.390217   68507 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 19:12:05.541538   68507 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0725 19:12:05.544399   68507 node_ready.go:35] waiting up to 15m0s for node "calico-889508" to be "Ready" ...
	I0725 19:12:05.863887   68507 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:05.863951   68507 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:05.863959   68507 main.go:141] libmachine: (calico-889508) Calling .Close
	I0725 19:12:05.863966   68507 main.go:141] libmachine: (calico-889508) Calling .Close
	I0725 19:12:05.864282   68507 main.go:141] libmachine: (calico-889508) DBG | Closing plugin on server side
	I0725 19:12:05.864284   68507 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:05.864305   68507 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:05.864315   68507 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:05.864333   68507 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:05.864340   68507 main.go:141] libmachine: (calico-889508) Calling .Close
	I0725 19:12:05.864346   68507 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:05.864354   68507 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:05.864361   68507 main.go:141] libmachine: (calico-889508) Calling .Close
	I0725 19:12:05.864584   68507 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:05.864594   68507 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:05.864611   68507 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:05.864656   68507 main.go:141] libmachine: (calico-889508) DBG | Closing plugin on server side
	I0725 19:12:05.864598   68507 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:05.875978   68507 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:05.876002   68507 main.go:141] libmachine: (calico-889508) Calling .Close
	I0725 19:12:05.876286   68507 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:05.876304   68507 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:05.877992   68507 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0725 19:12:05.879160   68507 addons.go:510] duration metric: took 1.042153267s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0725 19:12:06.046501   68507 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-889508" context rescaled to 1 replicas
	I0725 19:12:07.548500   68507 node_ready.go:53] node "calico-889508" has status "Ready":"False"
	I0725 19:12:09.549211   68507 node_ready.go:53] node "calico-889508" has status "Ready":"False"
	I0725 19:12:06.351781   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:06.352314   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:06.352350   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:06.352248   71081 retry.go:31] will retry after 1.861516663s: waiting for machine to come up
	I0725 19:12:08.215515   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:08.216164   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:08.216196   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:08.216100   71081 retry.go:31] will retry after 1.823135964s: waiting for machine to come up
	I0725 19:12:10.041103   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:10.041642   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:10.041664   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:10.041605   71081 retry.go:31] will retry after 2.758202134s: waiting for machine to come up
	I0725 19:12:06.017579   69429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.crt.fa663edc ...
	I0725 19:12:06.017606   69429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.crt.fa663edc: {Name:mkdd615b1e0b9ce040b0d8a40e7592134280b06e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:06.017759   69429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.key.fa663edc ...
	I0725 19:12:06.017771   69429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.key.fa663edc: {Name:mk69859a066c1398eea18f709e2d69eae7265731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:06.017839   69429 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.crt.fa663edc -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.crt
	I0725 19:12:06.017916   69429 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.key.fa663edc -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.key
	I0725 19:12:06.017974   69429 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/proxy-client.key
	I0725 19:12:06.017989   69429 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/proxy-client.crt with IP's: []
	I0725 19:12:06.175909   69429 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/proxy-client.crt ...
	I0725 19:12:06.175938   69429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/proxy-client.crt: {Name:mk91529e5ddf98dada3df938d9583d87af539581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:06.176162   69429 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/proxy-client.key ...
	I0725 19:12:06.176186   69429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/proxy-client.key: {Name:mk8e87e53c174d1432d49809a2cba246a5eb3aab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:06.176409   69429 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 19:12:06.176445   69429 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 19:12:06.176452   69429 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 19:12:06.176471   69429 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 19:12:06.176495   69429 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 19:12:06.176516   69429 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 19:12:06.176550   69429 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 19:12:06.177214   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 19:12:06.207903   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 19:12:06.235131   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 19:12:06.268897   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 19:12:06.298142   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 19:12:06.321825   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 19:12:06.350615   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 19:12:06.373491   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/custom-flannel-889508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 19:12:06.395812   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 19:12:06.422285   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 19:12:06.446575   69429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 19:12:06.469360   69429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 19:12:06.485219   69429 ssh_runner.go:195] Run: openssl version
	I0725 19:12:06.490848   69429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 19:12:06.501199   69429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 19:12:06.505654   69429 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 19:12:06.505712   69429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 19:12:06.511074   69429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 19:12:06.520951   69429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 19:12:06.531439   69429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:12:06.535861   69429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:12:06.535926   69429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:12:06.541506   69429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 19:12:06.554242   69429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 19:12:06.565272   69429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 19:12:06.569347   69429 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 19:12:06.569405   69429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 19:12:06.574695   69429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 19:12:06.587050   69429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 19:12:06.591180   69429 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 19:12:06.591247   69429 kubeadm.go:392] StartCluster: {Name:custom-flannel-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:12:06.591335   69429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 19:12:06.591385   69429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 19:12:06.631153   69429 cri.go:89] found id: ""
	I0725 19:12:06.631230   69429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 19:12:06.641435   69429 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 19:12:06.651687   69429 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 19:12:06.661239   69429 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 19:12:06.661256   69429 kubeadm.go:157] found existing configuration files:
	
	I0725 19:12:06.661308   69429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 19:12:06.670451   69429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 19:12:06.670521   69429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 19:12:06.679815   69429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 19:12:06.691862   69429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 19:12:06.691928   69429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 19:12:06.701379   69429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 19:12:06.710666   69429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 19:12:06.710734   69429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 19:12:06.719865   69429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 19:12:06.728604   69429 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 19:12:06.728674   69429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 19:12:06.737761   69429 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 19:12:06.930631   69429 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 19:12:11.549400   68507 node_ready.go:53] node "calico-889508" has status "Ready":"False"
	I0725 19:12:13.549971   68507 node_ready.go:53] node "calico-889508" has status "Ready":"False"
	I0725 19:12:14.550112   68507 node_ready.go:49] node "calico-889508" has status "Ready":"True"
	I0725 19:12:14.550134   68507 node_ready.go:38] duration metric: took 9.005700222s for node "calico-889508" to be "Ready" ...
	I0725 19:12:14.550145   68507 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:12:14.571182   68507 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:12.801994   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:12.802537   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:12.802569   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:12.802486   71081 retry.go:31] will retry after 2.645816516s: waiting for machine to come up
	I0725 19:12:15.450084   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:15.450708   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:15.450737   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:15.450658   71081 retry.go:31] will retry after 3.044236718s: waiting for machine to come up
	I0725 19:12:17.896099   69429 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 19:12:17.896178   69429 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 19:12:17.896298   69429 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 19:12:17.896427   69429 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 19:12:17.896546   69429 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 19:12:17.896640   69429 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 19:12:17.898119   69429 out.go:204]   - Generating certificates and keys ...
	I0725 19:12:17.898224   69429 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 19:12:17.898339   69429 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 19:12:17.898462   69429 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 19:12:17.898548   69429 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 19:12:17.898631   69429 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 19:12:17.898694   69429 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 19:12:17.898767   69429 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 19:12:17.898931   69429 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-889508 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	I0725 19:12:17.898999   69429 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 19:12:17.899181   69429 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-889508 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
	I0725 19:12:17.899274   69429 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 19:12:17.899359   69429 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 19:12:17.899421   69429 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 19:12:17.899499   69429 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 19:12:17.899569   69429 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 19:12:17.899653   69429 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 19:12:17.899726   69429 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 19:12:17.899810   69429 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 19:12:17.899884   69429 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 19:12:17.899984   69429 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 19:12:17.900072   69429 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 19:12:17.901403   69429 out.go:204]   - Booting up control plane ...
	I0725 19:12:17.901502   69429 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 19:12:17.901622   69429 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 19:12:17.901723   69429 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 19:12:17.901867   69429 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 19:12:17.902005   69429 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 19:12:17.902069   69429 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 19:12:17.902225   69429 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 19:12:17.902345   69429 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 19:12:17.902427   69429 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.310204ms
	I0725 19:12:17.902498   69429 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 19:12:17.902547   69429 kubeadm.go:310] [api-check] The API server is healthy after 5.501579556s
	I0725 19:12:17.902658   69429 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 19:12:17.902771   69429 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 19:12:17.902819   69429 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 19:12:17.903053   69429 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-889508 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 19:12:17.903148   69429 kubeadm.go:310] [bootstrap-token] Using token: d4h2qq.xk5ydhyz5z093z4f
	I0725 19:12:17.904625   69429 out.go:204]   - Configuring RBAC rules ...
	I0725 19:12:17.904753   69429 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 19:12:17.904848   69429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 19:12:17.904974   69429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 19:12:17.905226   69429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 19:12:17.905372   69429 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 19:12:17.905483   69429 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 19:12:17.905652   69429 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 19:12:17.905719   69429 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 19:12:17.905779   69429 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 19:12:17.905787   69429 kubeadm.go:310] 
	I0725 19:12:17.905863   69429 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 19:12:17.905877   69429 kubeadm.go:310] 
	I0725 19:12:17.905989   69429 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 19:12:17.905999   69429 kubeadm.go:310] 
	I0725 19:12:17.906032   69429 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 19:12:17.906114   69429 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 19:12:17.906180   69429 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 19:12:17.906188   69429 kubeadm.go:310] 
	I0725 19:12:17.906262   69429 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 19:12:17.906271   69429 kubeadm.go:310] 
	I0725 19:12:17.906337   69429 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 19:12:17.906343   69429 kubeadm.go:310] 
	I0725 19:12:17.906404   69429 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 19:12:17.906512   69429 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 19:12:17.906590   69429 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 19:12:17.906604   69429 kubeadm.go:310] 
	I0725 19:12:17.906707   69429 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 19:12:17.906806   69429 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 19:12:17.906820   69429 kubeadm.go:310] 
	I0725 19:12:17.906945   69429 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token d4h2qq.xk5ydhyz5z093z4f \
	I0725 19:12:17.907086   69429 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 \
	I0725 19:12:17.907116   69429 kubeadm.go:310] 	--control-plane 
	I0725 19:12:17.907123   69429 kubeadm.go:310] 
	I0725 19:12:17.907192   69429 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 19:12:17.907199   69429 kubeadm.go:310] 
	I0725 19:12:17.907291   69429 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token d4h2qq.xk5ydhyz5z093z4f \
	I0725 19:12:17.907400   69429 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 
	I0725 19:12:17.907415   69429 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0725 19:12:17.909689   69429 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0725 19:12:16.578854   68507 pod_ready.go:102] pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:19.078334   68507 pod_ready.go:102] pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:18.497700   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:18.498290   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find current IP address of domain enable-default-cni-889508 in network mk-enable-default-cni-889508
	I0725 19:12:18.498316   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | I0725 19:12:18.498247   71081 retry.go:31] will retry after 3.670596355s: waiting for machine to come up
	I0725 19:12:17.910949   69429 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0725 19:12:17.911020   69429 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml
	I0725 19:12:17.917547   69429 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0725 19:12:17.917576   69429 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0725 19:12:17.945343   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0725 19:12:18.466076   69429 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 19:12:18.466228   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-889508 minikube.k8s.io/updated_at=2024_07_25T19_12_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=custom-flannel-889508 minikube.k8s.io/primary=true
	I0725 19:12:18.466414   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:18.486384   69429 ops.go:34] apiserver oom_adj: -16
	I0725 19:12:18.643245   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:19.143481   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:19.643852   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:20.143352   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:20.643421   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:21.583197   68507 pod_ready.go:102] pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:24.077447   68507 pod_ready.go:102] pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:22.171998   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.172598   70755 main.go:141] libmachine: (enable-default-cni-889508) Found IP for machine: 192.168.72.226
	I0725 19:12:22.172625   70755 main.go:141] libmachine: (enable-default-cni-889508) Reserving static IP address...
	I0725 19:12:22.172652   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has current primary IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.173009   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-889508", mac: "52:54:00:27:ef:2e", ip: "192.168.72.226"} in network mk-enable-default-cni-889508
	I0725 19:12:22.254302   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Getting to WaitForSSH function...
	I0725 19:12:22.254341   70755 main.go:141] libmachine: (enable-default-cni-889508) Reserved static IP address: 192.168.72.226
	I0725 19:12:22.254382   70755 main.go:141] libmachine: (enable-default-cni-889508) Waiting for SSH to be available...
	I0725 19:12:22.257505   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.257946   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:minikube Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:22.257983   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.258118   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Using SSH client type: external
	I0725 19:12:22.258144   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/id_rsa (-rw-------)
	I0725 19:12:22.258182   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 19:12:22.258205   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | About to run SSH command:
	I0725 19:12:22.258234   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | exit 0
	I0725 19:12:22.392682   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | SSH cmd err, output: <nil>: 
	I0725 19:12:22.392949   70755 main.go:141] libmachine: (enable-default-cni-889508) KVM machine creation complete!
	I0725 19:12:22.393240   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetConfigRaw
	I0725 19:12:22.393822   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .DriverName
	I0725 19:12:22.394007   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .DriverName
	I0725 19:12:22.394149   70755 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0725 19:12:22.394165   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetState
	I0725 19:12:22.395760   70755 main.go:141] libmachine: Detecting operating system of created instance...
	I0725 19:12:22.395780   70755 main.go:141] libmachine: Waiting for SSH to be available...
	I0725 19:12:22.395788   70755 main.go:141] libmachine: Getting to WaitForSSH function...
	I0725 19:12:22.395795   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:22.398361   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.398835   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:22.398872   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.399035   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:22.399194   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:22.399342   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:22.399494   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:22.399694   70755 main.go:141] libmachine: Using SSH client type: native
	I0725 19:12:22.399936   70755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0725 19:12:22.399952   70755 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0725 19:12:22.512053   70755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 19:12:22.512081   70755 main.go:141] libmachine: Detecting the provisioner...
	I0725 19:12:22.512094   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:22.515297   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.515787   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:22.515817   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.515972   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:22.516156   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:22.516354   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:22.516521   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:22.516688   70755 main.go:141] libmachine: Using SSH client type: native
	I0725 19:12:22.516896   70755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0725 19:12:22.516908   70755 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0725 19:12:22.624690   70755 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0725 19:12:22.624757   70755 main.go:141] libmachine: found compatible host: buildroot
	I0725 19:12:22.624766   70755 main.go:141] libmachine: Provisioning with buildroot...
	I0725 19:12:22.624774   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetMachineName
	I0725 19:12:22.625028   70755 buildroot.go:166] provisioning hostname "enable-default-cni-889508"
	I0725 19:12:22.625062   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetMachineName
	I0725 19:12:22.625261   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:22.628552   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.628937   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:22.628961   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.629104   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:22.629280   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:22.629477   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:22.629630   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:22.629830   70755 main.go:141] libmachine: Using SSH client type: native
	I0725 19:12:22.630053   70755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0725 19:12:22.630072   70755 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-889508 && echo "enable-default-cni-889508" | sudo tee /etc/hostname
	I0725 19:12:22.753979   70755 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-889508
	
	I0725 19:12:22.754005   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:22.757074   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.757433   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:22.757463   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.757655   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:22.757854   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:22.758038   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:22.758184   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:22.758352   70755 main.go:141] libmachine: Using SSH client type: native
	I0725 19:12:22.758584   70755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0725 19:12:22.758616   70755 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-889508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-889508/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-889508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 19:12:22.873603   70755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 19:12:22.873642   70755 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 19:12:22.873687   70755 buildroot.go:174] setting up certificates
	I0725 19:12:22.873709   70755 provision.go:84] configureAuth start
	I0725 19:12:22.873728   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetMachineName
	I0725 19:12:22.874044   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetIP
	I0725 19:12:22.877474   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.877882   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:22.877915   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.878079   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:22.880978   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.881404   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:22.881433   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:22.881617   70755 provision.go:143] copyHostCerts
	I0725 19:12:22.881675   70755 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 19:12:22.881689   70755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 19:12:22.881747   70755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 19:12:22.881842   70755 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 19:12:22.881851   70755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 19:12:22.881873   70755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 19:12:22.881920   70755 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 19:12:22.881928   70755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 19:12:22.881944   70755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 19:12:22.881982   70755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-889508 san=[127.0.0.1 192.168.72.226 enable-default-cni-889508 localhost minikube]
	I0725 19:12:23.190299   70755 provision.go:177] copyRemoteCerts
	I0725 19:12:23.190390   70755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 19:12:23.190418   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:23.194025   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.194370   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:23.194418   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.194619   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:23.194839   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:23.195030   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:23.195196   70755 sshutil.go:53] new ssh client: &{IP:192.168.72.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/id_rsa Username:docker}
	I0725 19:12:23.287180   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0725 19:12:23.320131   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 19:12:23.344835   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 19:12:23.370613   70755 provision.go:87] duration metric: took 496.883024ms to configureAuth
	I0725 19:12:23.370642   70755 buildroot.go:189] setting minikube options for container-runtime
	I0725 19:12:23.370818   70755 config.go:182] Loaded profile config "enable-default-cni-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:12:23.370897   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:23.373581   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.373957   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:23.373984   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.374126   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:23.374326   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:23.374497   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:23.374673   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:23.374844   70755 main.go:141] libmachine: Using SSH client type: native
	I0725 19:12:23.375090   70755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0725 19:12:23.375107   70755 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 19:12:23.660569   70755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 19:12:23.660601   70755 main.go:141] libmachine: Checking connection to Docker...
	I0725 19:12:23.660613   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetURL
	I0725 19:12:23.662037   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Using libvirt version 6000000
	I0725 19:12:23.664527   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.664871   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:23.664898   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.665095   70755 main.go:141] libmachine: Docker is up and running!
	I0725 19:12:23.665110   70755 main.go:141] libmachine: Reticulating splines...
	I0725 19:12:23.665118   70755 client.go:171] duration metric: took 24.127660492s to LocalClient.Create
	I0725 19:12:23.665170   70755 start.go:167] duration metric: took 24.127737157s to libmachine.API.Create "enable-default-cni-889508"
	I0725 19:12:23.665187   70755 start.go:293] postStartSetup for "enable-default-cni-889508" (driver="kvm2")
	I0725 19:12:23.665203   70755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 19:12:23.665226   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .DriverName
	I0725 19:12:23.665491   70755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 19:12:23.665528   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:23.667485   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.667921   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:23.667952   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.668112   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:23.668277   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:23.668423   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:23.668596   70755 sshutil.go:53] new ssh client: &{IP:192.168.72.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/id_rsa Username:docker}
	I0725 19:12:23.750359   70755 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 19:12:23.754134   70755 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 19:12:23.754154   70755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 19:12:23.754211   70755 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 19:12:23.754275   70755 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 19:12:23.754379   70755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 19:12:23.764423   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 19:12:23.787991   70755 start.go:296] duration metric: took 122.786762ms for postStartSetup
	I0725 19:12:23.788040   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetConfigRaw
	I0725 19:12:23.788632   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetIP
	I0725 19:12:23.791380   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.791824   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:23.791860   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.792055   70755 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/config.json ...
	I0725 19:12:23.792289   70755 start.go:128] duration metric: took 24.278969208s to createHost
	I0725 19:12:23.792343   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:23.794292   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.794627   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:23.794654   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.794748   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:23.794898   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:23.795034   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:23.795214   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:23.795411   70755 main.go:141] libmachine: Using SSH client type: native
	I0725 19:12:23.795562   70755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0725 19:12:23.795571   70755 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 19:12:23.900807   70755 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721934743.874643856
	
	I0725 19:12:23.900853   70755 fix.go:216] guest clock: 1721934743.874643856
	I0725 19:12:23.900863   70755 fix.go:229] Guest: 2024-07-25 19:12:23.874643856 +0000 UTC Remote: 2024-07-25 19:12:23.792308577 +0000 UTC m=+58.344426870 (delta=82.335279ms)
	I0725 19:12:23.900891   70755 fix.go:200] guest clock delta is within tolerance: 82.335279ms
	I0725 19:12:23.900900   70755 start.go:83] releasing machines lock for "enable-default-cni-889508", held for 24.387747083s
	I0725 19:12:23.900927   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .DriverName
	I0725 19:12:23.901205   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetIP
	I0725 19:12:23.904278   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.904856   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:23.904878   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.905154   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .DriverName
	I0725 19:12:23.905646   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .DriverName
	I0725 19:12:23.905831   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .DriverName
	I0725 19:12:23.905972   70755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 19:12:23.906034   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:23.906093   70755 ssh_runner.go:195] Run: cat /version.json
	I0725 19:12:23.906122   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:23.908998   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.909287   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.909380   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:23.909408   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.909557   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:23.909669   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:23.909697   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:23.909736   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:23.910018   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:23.910043   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:23.910223   70755 sshutil.go:53] new ssh client: &{IP:192.168.72.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/id_rsa Username:docker}
	I0725 19:12:23.910280   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:23.910437   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:23.910658   70755 sshutil.go:53] new ssh client: &{IP:192.168.72.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/id_rsa Username:docker}
	I0725 19:12:24.026708   70755 ssh_runner.go:195] Run: systemctl --version
	I0725 19:12:24.032509   70755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 19:12:24.189054   70755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 19:12:24.195594   70755 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 19:12:24.195664   70755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 19:12:24.211378   70755 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 19:12:24.211402   70755 start.go:495] detecting cgroup driver to use...
	I0725 19:12:24.211469   70755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 19:12:24.229966   70755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 19:12:24.243447   70755 docker.go:217] disabling cri-docker service (if available) ...
	I0725 19:12:24.243507   70755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 19:12:24.256940   70755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 19:12:24.270477   70755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 19:12:24.384015   70755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 19:12:24.537722   70755 docker.go:233] disabling docker service ...
	I0725 19:12:24.537811   70755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 19:12:24.554543   70755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 19:12:24.571793   70755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 19:12:24.709878   70755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 19:12:24.849916   70755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 19:12:24.866177   70755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 19:12:24.888736   70755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 19:12:24.888806   70755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:24.900511   70755 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 19:12:24.900587   70755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:24.911595   70755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:24.923976   70755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:24.935685   70755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 19:12:24.946558   70755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:24.957457   70755 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:24.979454   70755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 19:12:24.991041   70755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 19:12:25.003800   70755 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 19:12:25.003858   70755 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 19:12:25.016472   70755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 19:12:25.026453   70755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:12:25.157116   70755 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 19:12:25.297675   70755 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 19:12:25.297800   70755 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 19:12:25.303402   70755 start.go:563] Will wait 60s for crictl version
	I0725 19:12:25.303464   70755 ssh_runner.go:195] Run: which crictl
	I0725 19:12:25.307730   70755 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 19:12:25.347549   70755 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 19:12:25.347633   70755 ssh_runner.go:195] Run: crio --version
	I0725 19:12:25.375292   70755 ssh_runner.go:195] Run: crio --version
	I0725 19:12:25.404094   70755 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 19:12:25.405349   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetIP
	I0725 19:12:25.408063   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:25.408429   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:25.408459   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:25.408648   70755 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0725 19:12:25.412251   70755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:12:25.423595   70755 kubeadm.go:883] updating cluster {Name:enable-default-cni-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 19:12:25.423712   70755 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 19:12:25.423772   70755 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:12:25.453551   70755 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 19:12:25.453628   70755 ssh_runner.go:195] Run: which lz4
	I0725 19:12:25.457501   70755 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 19:12:25.461744   70755 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 19:12:25.461775   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 19:12:21.144363   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:21.644265   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:22.144143   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:22.644064   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:23.143941   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:23.644357   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:24.143329   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:24.643337   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:25.143976   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:25.643599   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:26.079328   68507 pod_ready.go:102] pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:28.245000   68507 pod_ready.go:102] pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:26.803646   70755 crio.go:462] duration metric: took 1.346173385s to copy over tarball
	I0725 19:12:26.803746   70755 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 19:12:29.169640   70755 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.365864077s)
	I0725 19:12:29.169674   70755 crio.go:469] duration metric: took 2.366004293s to extract the tarball
	I0725 19:12:29.169684   70755 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 19:12:29.216135   70755 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:12:29.258664   70755 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 19:12:29.258696   70755 cache_images.go:84] Images are preloaded, skipping loading
	I0725 19:12:29.258704   70755 kubeadm.go:934] updating node { 192.168.72.226 8443 v1.30.3 crio true true} ...
	I0725 19:12:29.258829   70755 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-889508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0725 19:12:29.258935   70755 ssh_runner.go:195] Run: crio config
	I0725 19:12:29.303523   70755 cni.go:84] Creating CNI manager for "bridge"
	I0725 19:12:29.303553   70755 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 19:12:29.303581   70755 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.226 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-889508 NodeName:enable-default-cni-889508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 19:12:29.303758   70755 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-889508"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 19:12:29.303838   70755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 19:12:29.314548   70755 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 19:12:29.314623   70755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 19:12:29.323467   70755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0725 19:12:29.339644   70755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 19:12:29.355394   70755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0725 19:12:29.371132   70755 ssh_runner.go:195] Run: grep 192.168.72.226	control-plane.minikube.internal$ /etc/hosts
	I0725 19:12:29.374586   70755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:12:29.385739   70755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:12:29.532960   70755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:12:29.551053   70755 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508 for IP: 192.168.72.226
	I0725 19:12:29.551078   70755 certs.go:194] generating shared ca certs ...
	I0725 19:12:29.551099   70755 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:29.551264   70755 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 19:12:29.551321   70755 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 19:12:29.551333   70755 certs.go:256] generating profile certs ...
	I0725 19:12:29.551399   70755 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/client.key
	I0725 19:12:29.551416   70755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/client.crt with IP's: []
	I0725 19:12:29.944443   70755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/client.crt ...
	I0725 19:12:29.944475   70755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/client.crt: {Name:mkf76fd81af8bffa8b663464b453e3a9fe2db5a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:29.944696   70755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/client.key ...
	I0725 19:12:29.944717   70755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/client.key: {Name:mk2b8c29c7b42eb6ed296d199ad349519b3764d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:29.945932   70755 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.key.e77c8f64
	I0725 19:12:29.945975   70755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.crt.e77c8f64 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.226]
	I0725 19:12:30.411323   70755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.crt.e77c8f64 ...
	I0725 19:12:30.411361   70755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.crt.e77c8f64: {Name:mkc843c75bc866e78d6c59a4af8c9eef9e7324d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:30.411558   70755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.key.e77c8f64 ...
	I0725 19:12:30.411592   70755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.key.e77c8f64: {Name:mkef552ae2e70a847958617d16a3099bde89b70a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:30.411724   70755 certs.go:381] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.crt.e77c8f64 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.crt
	I0725 19:12:30.411847   70755 certs.go:385] copying /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.key.e77c8f64 -> /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.key
	I0725 19:12:30.411911   70755 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/proxy-client.key
	I0725 19:12:30.411927   70755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/proxy-client.crt with IP's: []
	I0725 19:12:26.144041   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:26.644226   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:27.143470   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:27.643609   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:28.143452   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:28.644375   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:29.143498   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:29.643551   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:30.143470   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:30.643446   69429 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:31.207776   69429 kubeadm.go:1113] duration metric: took 12.741407588s to wait for elevateKubeSystemPrivileges
	I0725 19:12:31.207820   69429 kubeadm.go:394] duration metric: took 24.616578466s to StartCluster
	I0725 19:12:31.207844   69429 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:31.207927   69429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 19:12:31.210271   69429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:31.254306   69429 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 19:12:31.254350   69429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 19:12:31.254427   69429 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 19:12:31.254507   69429 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-889508"
	I0725 19:12:31.254524   69429 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-889508"
	I0725 19:12:31.254537   69429 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-889508"
	I0725 19:12:31.254568   69429 host.go:66] Checking if "custom-flannel-889508" exists ...
	I0725 19:12:31.254569   69429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-889508"
	I0725 19:12:31.254582   69429 config.go:182] Loaded profile config "custom-flannel-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:12:31.255057   69429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:31.255071   69429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:31.255093   69429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:31.255108   69429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:31.273973   69429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40407
	I0725 19:12:31.274446   69429 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:31.274851   69429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0725 19:12:31.275392   69429 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:31.275513   69429 main.go:141] libmachine: Using API Version  1
	I0725 19:12:31.275541   69429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:31.275867   69429 main.go:141] libmachine: Using API Version  1
	I0725 19:12:31.275890   69429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:31.275939   69429 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:31.276146   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetState
	I0725 19:12:31.276219   69429 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:31.276773   69429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:31.276813   69429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:31.299081   69429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0725 19:12:31.299724   69429 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:31.300258   69429 main.go:141] libmachine: Using API Version  1
	I0725 19:12:31.300284   69429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:31.300714   69429 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:31.300941   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetState
	I0725 19:12:31.302719   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .DriverName
	I0725 19:12:31.317158   69429 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-889508"
	I0725 19:12:31.320543   69429 host.go:66] Checking if "custom-flannel-889508" exists ...
	I0725 19:12:31.320952   69429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:31.321007   69429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:31.337425   69429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34171
	I0725 19:12:31.337912   69429 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:31.338440   69429 main.go:141] libmachine: Using API Version  1
	I0725 19:12:31.338462   69429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:31.338838   69429 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:31.339315   69429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:31.339364   69429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:31.356056   69429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0725 19:12:31.356483   69429 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:31.356981   69429 main.go:141] libmachine: Using API Version  1
	I0725 19:12:31.357003   69429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:31.357332   69429 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:31.357515   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetState
	I0725 19:12:31.359301   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .DriverName
	I0725 19:12:31.359517   69429 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 19:12:31.359534   69429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 19:12:31.359556   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:12:31.362588   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:12:31.363026   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:12:31.363060   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:12:31.363231   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:12:31.363592   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:12:31.363772   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:12:31.363900   69429 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/id_rsa Username:docker}
	I0725 19:12:31.384613   69429 out.go:177] * Verifying Kubernetes components...
	I0725 19:12:31.461696   69429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 19:12:31.568616   69429 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 19:12:31.634992   69429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:12:31.699034   69429 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:12:31.699062   69429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 19:12:31.699087   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHHostname
	I0725 19:12:31.702824   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:12:31.703271   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:5a:79", ip: ""} in network mk-custom-flannel-889508: {Iface:virbr3 ExpiryTime:2024-07-25 20:11:49 +0000 UTC Type:0 Mac:52:54:00:21:5a:79 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:custom-flannel-889508 Clientid:01:52:54:00:21:5a:79}
	I0725 19:12:31.703305   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | domain custom-flannel-889508 has defined IP address 192.168.39.248 and MAC address 52:54:00:21:5a:79 in network mk-custom-flannel-889508
	I0725 19:12:31.703513   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHPort
	I0725 19:12:31.703753   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHKeyPath
	I0725 19:12:31.703945   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .GetSSHUsername
	I0725 19:12:31.704207   69429 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/custom-flannel-889508/id_rsa Username:docker}
	I0725 19:12:31.901144   69429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:12:32.220049   69429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 19:12:33.162774   69429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.701037506s)
	I0725 19:12:33.162829   69429 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:33.162842   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .Close
	I0725 19:12:33.162837   69429 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.527808779s)
	I0725 19:12:33.162903   69429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:12:33.163216   69429 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:33.163231   69429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:33.163241   69429 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:33.163249   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .Close
	I0725 19:12:33.163476   69429 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:33.163491   69429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:33.640127   69429 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:33.640159   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .Close
	I0725 19:12:33.640609   69429 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:33.640782   69429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:33.640755   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Closing plugin on server side
	I0725 19:12:33.714687   69429 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.49459552s)
	I0725 19:12:33.714787   69429 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0725 19:12:33.714790   69429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.81360953s)
	I0725 19:12:33.714938   69429 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:33.714956   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .Close
	I0725 19:12:33.715244   69429 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:33.715263   69429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:33.715266   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Closing plugin on server side
	I0725 19:12:33.715272   69429 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:33.715315   69429 main.go:141] libmachine: (custom-flannel-889508) Calling .Close
	I0725 19:12:33.715731   69429 main.go:141] libmachine: (custom-flannel-889508) DBG | Closing plugin on server side
	I0725 19:12:33.715796   69429 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:33.715818   69429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:33.716386   69429 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-889508" to be "Ready" ...
	I0725 19:12:33.717482   69429 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0725 19:12:30.578017   68507 pod_ready.go:102] pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:32.601358   68507 pod_ready.go:102] pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:34.607408   68507 pod_ready.go:102] pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:30.514741   70755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/proxy-client.crt ...
	I0725 19:12:30.514771   70755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/proxy-client.crt: {Name:mk7ccea4ef6e02271fb83c2a26a2042115ecdb80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:30.514925   70755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/proxy-client.key ...
	I0725 19:12:30.514937   70755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/proxy-client.key: {Name:mkfa10d56f7beebf44952bec970f2a430271af4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:30.515095   70755 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 19:12:30.515131   70755 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 19:12:30.515138   70755 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 19:12:30.515157   70755 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 19:12:30.515203   70755 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 19:12:30.515224   70755 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 19:12:30.515259   70755 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 19:12:30.515796   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 19:12:30.548551   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 19:12:30.573717   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 19:12:30.598401   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 19:12:30.621199   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0725 19:12:30.650435   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 19:12:30.678261   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 19:12:30.705336   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/enable-default-cni-889508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 19:12:30.777118   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 19:12:30.801414   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 19:12:30.825011   70755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 19:12:30.847848   70755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 19:12:30.864255   70755 ssh_runner.go:195] Run: openssl version
	I0725 19:12:30.870086   70755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 19:12:30.881036   70755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 19:12:30.885528   70755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 19:12:30.885584   70755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 19:12:30.891300   70755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 19:12:30.904912   70755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 19:12:30.916897   70755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:12:30.922369   70755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:12:30.922429   70755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:12:30.928042   70755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 19:12:30.938424   70755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 19:12:30.950672   70755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 19:12:30.955460   70755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 19:12:30.955531   70755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 19:12:30.961693   70755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 19:12:30.975057   70755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 19:12:30.979110   70755 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 19:12:30.979168   70755 kubeadm.go:392] StartCluster: {Name:enable-default-cni-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:12:30.979265   70755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 19:12:30.979315   70755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 19:12:31.027445   70755 cri.go:89] found id: ""
	I0725 19:12:31.027520   70755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 19:12:31.037240   70755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 19:12:31.046860   70755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 19:12:31.057602   70755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 19:12:31.057626   70755 kubeadm.go:157] found existing configuration files:
	
	I0725 19:12:31.057665   70755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 19:12:31.066818   70755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 19:12:31.066879   70755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 19:12:31.076459   70755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 19:12:31.085072   70755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 19:12:31.085135   70755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 19:12:31.093592   70755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 19:12:31.101990   70755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 19:12:31.102046   70755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 19:12:31.110913   70755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 19:12:31.119809   70755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 19:12:31.119885   70755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 19:12:31.128763   70755 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 19:12:31.345135   70755 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 19:12:33.718598   69429 addons.go:510] duration metric: took 2.464191098s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0725 19:12:34.223211   69429 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-889508" context rescaled to 1 replicas
	I0725 19:12:35.720790   69429 node_ready.go:53] node "custom-flannel-889508" has status "Ready":"False"
	I0725 19:12:36.078547   68507 pod_ready.go:92] pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:36.078571   68507 pod_ready.go:81] duration metric: took 21.507364012s for pod "calico-kube-controllers-564985c589-t4bs7" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.078591   68507 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-2g9sq" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.084006   68507 pod_ready.go:92] pod "calico-node-2g9sq" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:36.084033   68507 pod_ready.go:81] duration metric: took 5.433727ms for pod "calico-node-2g9sq" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.084044   68507 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-g7q5z" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.088700   68507 pod_ready.go:92] pod "coredns-7db6d8ff4d-g7q5z" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:36.088725   68507 pod_ready.go:81] duration metric: took 4.672641ms for pod "coredns-7db6d8ff4d-g7q5z" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.088739   68507 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.092863   68507 pod_ready.go:92] pod "etcd-calico-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:36.092885   68507 pod_ready.go:81] duration metric: took 4.137923ms for pod "etcd-calico-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.092896   68507 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.097025   68507 pod_ready.go:92] pod "kube-apiserver-calico-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:36.097044   68507 pod_ready.go:81] duration metric: took 4.140033ms for pod "kube-apiserver-calico-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.097052   68507 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.475812   68507 pod_ready.go:92] pod "kube-controller-manager-calico-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:36.475840   68507 pod_ready.go:81] duration metric: took 378.780943ms for pod "kube-controller-manager-calico-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.475852   68507 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-ths42" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.875741   68507 pod_ready.go:92] pod "kube-proxy-ths42" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:36.875765   68507 pod_ready.go:81] duration metric: took 399.907223ms for pod "kube-proxy-ths42" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:36.875775   68507 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:37.276546   68507 pod_ready.go:92] pod "kube-scheduler-calico-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:37.276574   68507 pod_ready.go:81] duration metric: took 400.792439ms for pod "kube-scheduler-calico-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:37.276584   68507 pod_ready.go:38] duration metric: took 22.726416171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:12:37.276597   68507 api_server.go:52] waiting for apiserver process to appear ...
	I0725 19:12:37.276645   68507 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 19:12:37.298245   68507 api_server.go:72] duration metric: took 32.461525498s to wait for apiserver process to appear ...
	I0725 19:12:37.298273   68507 api_server.go:88] waiting for apiserver healthz status ...
	I0725 19:12:37.298301   68507 api_server.go:253] Checking apiserver healthz at https://192.168.50.187:8443/healthz ...
	I0725 19:12:37.305132   68507 api_server.go:279] https://192.168.50.187:8443/healthz returned 200:
	ok
	I0725 19:12:37.306223   68507 api_server.go:141] control plane version: v1.30.3
	I0725 19:12:37.306246   68507 api_server.go:131] duration metric: took 7.966771ms to wait for apiserver health ...
	I0725 19:12:37.306254   68507 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 19:12:37.482775   68507 system_pods.go:59] 9 kube-system pods found
	I0725 19:12:37.482821   68507 system_pods.go:61] "calico-kube-controllers-564985c589-t4bs7" [b392fd0e-de22-496d-8e38-4d4482879d41] Running
	I0725 19:12:37.482832   68507 system_pods.go:61] "calico-node-2g9sq" [5a2206ae-f255-4eb2-a83b-6e05a9ab6ca4] Running
	I0725 19:12:37.482838   68507 system_pods.go:61] "coredns-7db6d8ff4d-g7q5z" [12641b3f-7fbe-4499-ac4b-2d7acb696f79] Running
	I0725 19:12:37.482843   68507 system_pods.go:61] "etcd-calico-889508" [822b50dd-f5e2-4b42-828a-5cb588092122] Running
	I0725 19:12:37.482849   68507 system_pods.go:61] "kube-apiserver-calico-889508" [d32df2ae-6be2-45ec-b12c-e8b38a8167c7] Running
	I0725 19:12:37.482855   68507 system_pods.go:61] "kube-controller-manager-calico-889508" [4a083143-d871-43c0-8006-44d702366fab] Running
	I0725 19:12:37.482859   68507 system_pods.go:61] "kube-proxy-ths42" [9126b398-17a4-4f35-953d-4e9f83e1d703] Running
	I0725 19:12:37.482868   68507 system_pods.go:61] "kube-scheduler-calico-889508" [65f0ec1f-ae94-4c7a-a50a-c1baeb9e40cb] Running
	I0725 19:12:37.482872   68507 system_pods.go:61] "storage-provisioner" [0b3b02ac-8ba0-4ecc-8dc1-914fb10936a5] Running
	I0725 19:12:37.482883   68507 system_pods.go:74] duration metric: took 176.622846ms to wait for pod list to return data ...
	I0725 19:12:37.482896   68507 default_sa.go:34] waiting for default service account to be created ...
	I0725 19:12:37.675146   68507 default_sa.go:45] found service account: "default"
	I0725 19:12:37.675185   68507 default_sa.go:55] duration metric: took 192.270922ms for default service account to be created ...
	I0725 19:12:37.675196   68507 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 19:12:37.879728   68507 system_pods.go:86] 9 kube-system pods found
	I0725 19:12:37.879754   68507 system_pods.go:89] "calico-kube-controllers-564985c589-t4bs7" [b392fd0e-de22-496d-8e38-4d4482879d41] Running
	I0725 19:12:37.879759   68507 system_pods.go:89] "calico-node-2g9sq" [5a2206ae-f255-4eb2-a83b-6e05a9ab6ca4] Running
	I0725 19:12:37.879763   68507 system_pods.go:89] "coredns-7db6d8ff4d-g7q5z" [12641b3f-7fbe-4499-ac4b-2d7acb696f79] Running
	I0725 19:12:37.879767   68507 system_pods.go:89] "etcd-calico-889508" [822b50dd-f5e2-4b42-828a-5cb588092122] Running
	I0725 19:12:37.879771   68507 system_pods.go:89] "kube-apiserver-calico-889508" [d32df2ae-6be2-45ec-b12c-e8b38a8167c7] Running
	I0725 19:12:37.879775   68507 system_pods.go:89] "kube-controller-manager-calico-889508" [4a083143-d871-43c0-8006-44d702366fab] Running
	I0725 19:12:37.879780   68507 system_pods.go:89] "kube-proxy-ths42" [9126b398-17a4-4f35-953d-4e9f83e1d703] Running
	I0725 19:12:37.879783   68507 system_pods.go:89] "kube-scheduler-calico-889508" [65f0ec1f-ae94-4c7a-a50a-c1baeb9e40cb] Running
	I0725 19:12:37.879787   68507 system_pods.go:89] "storage-provisioner" [0b3b02ac-8ba0-4ecc-8dc1-914fb10936a5] Running
	I0725 19:12:37.879793   68507 system_pods.go:126] duration metric: took 204.591967ms to wait for k8s-apps to be running ...
	I0725 19:12:37.879799   68507 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 19:12:37.879839   68507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 19:12:37.895610   68507 system_svc.go:56] duration metric: took 15.8ms WaitForService to wait for kubelet
	I0725 19:12:37.895643   68507 kubeadm.go:582] duration metric: took 33.058936972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 19:12:37.895669   68507 node_conditions.go:102] verifying NodePressure condition ...
	I0725 19:12:38.075542   68507 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 19:12:38.075574   68507 node_conditions.go:123] node cpu capacity is 2
	I0725 19:12:38.075590   68507 node_conditions.go:105] duration metric: took 179.914414ms to run NodePressure ...
	I0725 19:12:38.075601   68507 start.go:241] waiting for startup goroutines ...
	I0725 19:12:38.075610   68507 start.go:246] waiting for cluster config update ...
	I0725 19:12:38.075623   68507 start.go:255] writing updated cluster config ...
	I0725 19:12:38.075907   68507 ssh_runner.go:195] Run: rm -f paused
	I0725 19:12:38.139231   68507 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 19:12:38.142179   68507 out.go:177] * Done! kubectl is now configured to use "calico-889508" cluster and "default" namespace by default
	I0725 19:12:38.220608   69429 node_ready.go:53] node "custom-flannel-889508" has status "Ready":"False"
	I0725 19:12:40.719571   69429 node_ready.go:53] node "custom-flannel-889508" has status "Ready":"False"
	I0725 19:12:42.926249   70755 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 19:12:42.926322   70755 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 19:12:42.926421   70755 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 19:12:42.926589   70755 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 19:12:42.926697   70755 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 19:12:42.926774   70755 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 19:12:42.928282   70755 out.go:204]   - Generating certificates and keys ...
	I0725 19:12:42.928418   70755 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 19:12:42.928494   70755 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 19:12:42.928593   70755 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 19:12:42.928675   70755 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 19:12:42.928749   70755 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 19:12:42.928808   70755 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 19:12:42.928893   70755 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 19:12:42.929055   70755 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-889508 localhost] and IPs [192.168.72.226 127.0.0.1 ::1]
	I0725 19:12:42.929122   70755 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 19:12:42.929281   70755 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-889508 localhost] and IPs [192.168.72.226 127.0.0.1 ::1]
	I0725 19:12:42.929366   70755 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 19:12:42.929472   70755 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 19:12:42.929548   70755 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 19:12:42.929627   70755 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 19:12:42.929669   70755 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 19:12:42.929712   70755 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 19:12:42.929754   70755 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 19:12:42.929817   70755 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 19:12:42.929867   70755 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 19:12:42.929951   70755 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 19:12:42.930025   70755 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 19:12:42.931328   70755 out.go:204]   - Booting up control plane ...
	I0725 19:12:42.931427   70755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 19:12:42.931514   70755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 19:12:42.931607   70755 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 19:12:42.931754   70755 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 19:12:42.931865   70755 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 19:12:42.931900   70755 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 19:12:42.932051   70755 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 19:12:42.932163   70755 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 19:12:42.932234   70755 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.214992ms
	I0725 19:12:42.932361   70755 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 19:12:42.932447   70755 kubeadm.go:310] [api-check] The API server is healthy after 5.002453012s
	I0725 19:12:42.932604   70755 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 19:12:42.932755   70755 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 19:12:42.932803   70755 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 19:12:42.933055   70755 kubeadm.go:310] [mark-control-plane] Marking the node enable-default-cni-889508 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 19:12:42.933138   70755 kubeadm.go:310] [bootstrap-token] Using token: k6p4ck.f713bxtgvyjorr24
	I0725 19:12:42.934585   70755 out.go:204]   - Configuring RBAC rules ...
	I0725 19:12:42.934718   70755 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 19:12:42.934821   70755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 19:12:42.934979   70755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 19:12:42.935131   70755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 19:12:42.935280   70755 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 19:12:42.935389   70755 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 19:12:42.935513   70755 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 19:12:42.935573   70755 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 19:12:42.935642   70755 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 19:12:42.935650   70755 kubeadm.go:310] 
	I0725 19:12:42.935721   70755 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 19:12:42.935729   70755 kubeadm.go:310] 
	I0725 19:12:42.935830   70755 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 19:12:42.935839   70755 kubeadm.go:310] 
	I0725 19:12:42.935869   70755 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 19:12:42.935941   70755 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 19:12:42.936005   70755 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 19:12:42.936013   70755 kubeadm.go:310] 
	I0725 19:12:42.936078   70755 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 19:12:42.936086   70755 kubeadm.go:310] 
	I0725 19:12:42.936143   70755 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 19:12:42.936151   70755 kubeadm.go:310] 
	I0725 19:12:42.936211   70755 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 19:12:42.936300   70755 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 19:12:42.936398   70755 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 19:12:42.936408   70755 kubeadm.go:310] 
	I0725 19:12:42.936516   70755 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 19:12:42.936614   70755 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 19:12:42.936623   70755 kubeadm.go:310] 
	I0725 19:12:42.936722   70755 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k6p4ck.f713bxtgvyjorr24 \
	I0725 19:12:42.936846   70755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 \
	I0725 19:12:42.936875   70755 kubeadm.go:310] 	--control-plane 
	I0725 19:12:42.936882   70755 kubeadm.go:310] 
	I0725 19:12:42.936983   70755 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 19:12:42.936992   70755 kubeadm.go:310] 
	I0725 19:12:42.937093   70755 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k6p4ck.f713bxtgvyjorr24 \
	I0725 19:12:42.937227   70755 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d31d36976bef3213bd84e5fcdeb11cbe5a722f08578572dd8bebba4538fc8244 
	I0725 19:12:42.937242   70755 cni.go:84] Creating CNI manager for "bridge"
	I0725 19:12:42.938564   70755 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 19:12:42.939757   70755 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 19:12:42.952289   70755 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 19:12:42.972644   70755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 19:12:42.972717   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:42.972737   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-889508 minikube.k8s.io/updated_at=2024_07_25T19_12_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=enable-default-cni-889508 minikube.k8s.io/primary=true
	I0725 19:12:43.119090   70755 ops.go:34] apiserver oom_adj: -16
	I0725 19:12:43.119270   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:43.620148   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:44.120097   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:44.620152   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:45.119386   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:41.725610   69429 node_ready.go:49] node "custom-flannel-889508" has status "Ready":"True"
	I0725 19:12:41.725635   69429 node_ready.go:38] duration metric: took 8.009223606s for node "custom-flannel-889508" to be "Ready" ...
	I0725 19:12:41.725643   69429 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:12:41.732363   69429 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-vp6h8" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:43.740013   69429 pod_ready.go:102] pod "coredns-7db6d8ff4d-vp6h8" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:45.620127   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:46.120222   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:46.619780   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:47.119942   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:47.619485   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:48.119909   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:48.619761   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:49.119720   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:49.619923   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:50.119931   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:46.239117   69429 pod_ready.go:102] pod "coredns-7db6d8ff4d-vp6h8" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:48.739589   69429 pod_ready.go:102] pod "coredns-7db6d8ff4d-vp6h8" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:50.620218   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:51.119875   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:51.619706   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:52.120143   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:52.619880   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:53.119574   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:53.620229   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:54.119582   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:54.619806   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:55.119942   70755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:12:55.228225   70755 kubeadm.go:1113] duration metric: took 12.255574633s to wait for elevateKubeSystemPrivileges
	I0725 19:12:55.228257   70755 kubeadm.go:394] duration metric: took 24.249094249s to StartCluster
	I0725 19:12:55.228278   70755 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:55.228382   70755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 19:12:55.230773   70755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:12:55.231004   70755 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 19:12:55.231017   70755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 19:12:55.231028   70755 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 19:12:55.231104   70755 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-889508"
	I0725 19:12:55.231134   70755 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-889508"
	I0725 19:12:55.231141   70755 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-889508"
	I0725 19:12:55.231159   70755 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-889508"
	I0725 19:12:55.231175   70755 host.go:66] Checking if "enable-default-cni-889508" exists ...
	I0725 19:12:55.231257   70755 config.go:182] Loaded profile config "enable-default-cni-889508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:12:55.231618   70755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:55.231618   70755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:55.231665   70755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:55.231668   70755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:55.232632   70755 out.go:177] * Verifying Kubernetes components...
	I0725 19:12:55.234008   70755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:12:55.248658   70755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42149
	I0725 19:12:55.249098   70755 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:55.249524   70755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0725 19:12:55.249551   70755 main.go:141] libmachine: Using API Version  1
	I0725 19:12:55.249573   70755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:55.249934   70755 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:55.250022   70755 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:55.250203   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetState
	I0725 19:12:55.250354   70755 main.go:141] libmachine: Using API Version  1
	I0725 19:12:55.250366   70755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:55.250599   70755 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:55.250949   70755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:55.250962   70755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:55.254045   70755 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-889508"
	I0725 19:12:55.254083   70755 host.go:66] Checking if "enable-default-cni-889508" exists ...
	I0725 19:12:55.254439   70755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:55.254482   70755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:55.267118   70755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0725 19:12:55.267592   70755 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:55.268083   70755 main.go:141] libmachine: Using API Version  1
	I0725 19:12:55.268117   70755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:55.268539   70755 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:55.268757   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetState
	I0725 19:12:55.270645   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .DriverName
	I0725 19:12:55.272349   70755 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 19:12:55.273087   70755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45383
	I0725 19:12:55.273485   70755 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:55.273487   70755 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:12:55.273550   70755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 19:12:55.273568   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:55.274519   70755 main.go:141] libmachine: Using API Version  1
	I0725 19:12:55.274538   70755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:55.274989   70755 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:55.275473   70755 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:12:55.275509   70755 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:12:55.276305   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:55.276734   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:55.276758   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:55.276936   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:55.277067   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:55.277156   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:55.277230   70755 sshutil.go:53] new ssh client: &{IP:192.168.72.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/id_rsa Username:docker}
	I0725 19:12:55.295773   70755 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I0725 19:12:55.296238   70755 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:12:55.296774   70755 main.go:141] libmachine: Using API Version  1
	I0725 19:12:55.296801   70755 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:12:55.297147   70755 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:12:55.297366   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetState
	I0725 19:12:55.299002   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .DriverName
	I0725 19:12:55.299252   70755 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 19:12:55.299270   70755 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 19:12:55.299293   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHHostname
	I0725 19:12:55.302472   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:55.302908   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:ef:2e", ip: ""} in network mk-enable-default-cni-889508: {Iface:virbr4 ExpiryTime:2024-07-25 20:12:15 +0000 UTC Type:0 Mac:52:54:00:27:ef:2e Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:enable-default-cni-889508 Clientid:01:52:54:00:27:ef:2e}
	I0725 19:12:55.302944   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | domain enable-default-cni-889508 has defined IP address 192.168.72.226 and MAC address 52:54:00:27:ef:2e in network mk-enable-default-cni-889508
	I0725 19:12:55.303187   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHPort
	I0725 19:12:55.303345   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHKeyPath
	I0725 19:12:55.303473   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .GetSSHUsername
	I0725 19:12:55.303620   70755 sshutil.go:53] new ssh client: &{IP:192.168.72.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/enable-default-cni-889508/id_rsa Username:docker}
	I0725 19:12:55.400221   70755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 19:12:55.451021   70755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:12:51.239931   69429 pod_ready.go:102] pod "coredns-7db6d8ff4d-vp6h8" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:53.739175   69429 pod_ready.go:102] pod "coredns-7db6d8ff4d-vp6h8" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:55.739495   69429 pod_ready.go:102] pod "coredns-7db6d8ff4d-vp6h8" in "kube-system" namespace has status "Ready":"False"
	I0725 19:12:55.611221   70755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:12:55.629238   70755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 19:12:55.816469   70755 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0725 19:12:55.818042   70755 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-889508" to be "Ready" ...
	I0725 19:12:55.830816   70755 node_ready.go:49] node "enable-default-cni-889508" has status "Ready":"True"
	I0725 19:12:55.830839   70755 node_ready.go:38] duration metric: took 12.77099ms for node "enable-default-cni-889508" to be "Ready" ...
	I0725 19:12:55.830850   70755 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:12:55.850578   70755 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:55.863385   70755 pod_ready.go:92] pod "etcd-enable-default-cni-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:55.863416   70755 pod_ready.go:81] duration metric: took 12.809446ms for pod "etcd-enable-default-cni-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:55.863430   70755 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:55.879372   70755 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:55.879397   70755 pod_ready.go:81] duration metric: took 15.958725ms for pod "kube-apiserver-enable-default-cni-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:55.879410   70755 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:55.892655   70755 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:55.892677   70755 pod_ready.go:81] duration metric: took 13.259296ms for pod "kube-controller-manager-enable-default-cni-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:55.892687   70755 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:55.912712   70755 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-889508" in "kube-system" namespace has status "Ready":"True"
	I0725 19:12:55.912734   70755 pod_ready.go:81] duration metric: took 20.040307ms for pod "kube-scheduler-enable-default-cni-889508" in "kube-system" namespace to be "Ready" ...
	I0725 19:12:55.912746   70755 pod_ready.go:38] duration metric: took 81.882761ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:12:55.912763   70755 api_server.go:52] waiting for apiserver process to appear ...
	I0725 19:12:55.912818   70755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 19:12:56.128511   70755 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:56.128539   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .Close
	I0725 19:12:56.128568   70755 api_server.go:72] duration metric: took 897.532843ms to wait for apiserver process to appear ...
	I0725 19:12:56.128603   70755 api_server.go:88] waiting for apiserver healthz status ...
	I0725 19:12:56.128637   70755 api_server.go:253] Checking apiserver healthz at https://192.168.72.226:8443/healthz ...
	I0725 19:12:56.128726   70755 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:56.128748   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .Close
	I0725 19:12:56.128810   70755 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:56.128816   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Closing plugin on server side
	I0725 19:12:56.128822   70755 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:56.128854   70755 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:56.128867   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .Close
	I0725 19:12:56.129001   70755 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:56.129023   70755 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:56.129032   70755 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:56.129040   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .Close
	I0725 19:12:56.129171   70755 main.go:141] libmachine: (enable-default-cni-889508) DBG | Closing plugin on server side
	I0725 19:12:56.129200   70755 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:56.129212   70755 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:56.129277   70755 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:56.129302   70755 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:56.136166   70755 api_server.go:279] https://192.168.72.226:8443/healthz returned 200:
	ok
	I0725 19:12:56.137173   70755 api_server.go:141] control plane version: v1.30.3
	I0725 19:12:56.137198   70755 api_server.go:131] duration metric: took 8.577385ms to wait for apiserver health ...
	I0725 19:12:56.137207   70755 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 19:12:56.161123   70755 system_pods.go:59] 6 kube-system pods found
	I0725 19:12:56.161149   70755 system_pods.go:61] "etcd-enable-default-cni-889508" [4d6f1543-fdb6-4b23-b962-ddf41956e076] Running
	I0725 19:12:56.161155   70755 system_pods.go:61] "kube-apiserver-enable-default-cni-889508" [c7173340-d6c2-41a4-8950-fe80de3b7cc0] Running
	I0725 19:12:56.161159   70755 system_pods.go:61] "kube-controller-manager-enable-default-cni-889508" [39c5eb8d-3207-40f8-b5f4-c86992961c21] Running
	I0725 19:12:56.161166   70755 system_pods.go:61] "kube-proxy-q9vdf" [062160c5-1559-4ae4-96bc-9b76a23611c8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 19:12:56.161170   70755 system_pods.go:61] "kube-scheduler-enable-default-cni-889508" [1889f7b0-cdf7-4545-b388-e2e4fd2322f0] Running
	I0725 19:12:56.161176   70755 system_pods.go:61] "storage-provisioner" [02448b95-67c9-47a3-af7b-e80b6b8e6745] Pending
	I0725 19:12:56.161182   70755 system_pods.go:74] duration metric: took 23.968895ms to wait for pod list to return data ...
	I0725 19:12:56.161191   70755 default_sa.go:34] waiting for default service account to be created ...
	I0725 19:12:56.165496   70755 main.go:141] libmachine: Making call to close driver server
	I0725 19:12:56.165514   70755 main.go:141] libmachine: (enable-default-cni-889508) Calling .Close
	I0725 19:12:56.165827   70755 main.go:141] libmachine: Successfully made call to close driver server
	I0725 19:12:56.165845   70755 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 19:12:56.167507   70755 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0725 19:12:56.168785   70755 addons.go:510] duration metric: took 937.751239ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0725 19:12:56.232754   70755 default_sa.go:45] found service account: "default"
	I0725 19:12:56.232787   70755 default_sa.go:55] duration metric: took 71.586376ms for default service account to be created ...
	I0725 19:12:56.232798   70755 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 19:12:56.326111   70755 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-889508" context rescaled to 1 replicas
	I0725 19:12:56.425167   70755 system_pods.go:86] 8 kube-system pods found
	I0725 19:12:56.425195   70755 system_pods.go:89] "coredns-7db6d8ff4d-2slbz" [281e835c-643f-42e3-a2ce-16b15436f3b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 19:12:56.425202   70755 system_pods.go:89] "coredns-7db6d8ff4d-hwszp" [e5b5eb58-5b80-4cf5-932c-098f04951a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 19:12:56.425210   70755 system_pods.go:89] "etcd-enable-default-cni-889508" [4d6f1543-fdb6-4b23-b962-ddf41956e076] Running
	I0725 19:12:56.425217   70755 system_pods.go:89] "kube-apiserver-enable-default-cni-889508" [c7173340-d6c2-41a4-8950-fe80de3b7cc0] Running
	I0725 19:12:56.425223   70755 system_pods.go:89] "kube-controller-manager-enable-default-cni-889508" [39c5eb8d-3207-40f8-b5f4-c86992961c21] Running
	I0725 19:12:56.425231   70755 system_pods.go:89] "kube-proxy-q9vdf" [062160c5-1559-4ae4-96bc-9b76a23611c8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 19:12:56.425237   70755 system_pods.go:89] "kube-scheduler-enable-default-cni-889508" [1889f7b0-cdf7-4545-b388-e2e4fd2322f0] Running
	I0725 19:12:56.425247   70755 system_pods.go:89] "storage-provisioner" [02448b95-67c9-47a3-af7b-e80b6b8e6745] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 19:12:56.425266   70755 retry.go:31] will retry after 220.167714ms: missing components: kube-dns, kube-proxy
	I0725 19:12:56.652055   70755 system_pods.go:86] 8 kube-system pods found
	I0725 19:12:56.652085   70755 system_pods.go:89] "coredns-7db6d8ff4d-2slbz" [281e835c-643f-42e3-a2ce-16b15436f3b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 19:12:56.652094   70755 system_pods.go:89] "coredns-7db6d8ff4d-hwszp" [e5b5eb58-5b80-4cf5-932c-098f04951a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 19:12:56.652103   70755 system_pods.go:89] "etcd-enable-default-cni-889508" [4d6f1543-fdb6-4b23-b962-ddf41956e076] Running
	I0725 19:12:56.652112   70755 system_pods.go:89] "kube-apiserver-enable-default-cni-889508" [c7173340-d6c2-41a4-8950-fe80de3b7cc0] Running
	I0725 19:12:56.652120   70755 system_pods.go:89] "kube-controller-manager-enable-default-cni-889508" [39c5eb8d-3207-40f8-b5f4-c86992961c21] Running
	I0725 19:12:56.652129   70755 system_pods.go:89] "kube-proxy-q9vdf" [062160c5-1559-4ae4-96bc-9b76a23611c8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 19:12:56.652140   70755 system_pods.go:89] "kube-scheduler-enable-default-cni-889508" [1889f7b0-cdf7-4545-b388-e2e4fd2322f0] Running
	I0725 19:12:56.652149   70755 system_pods.go:89] "storage-provisioner" [02448b95-67c9-47a3-af7b-e80b6b8e6745] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 19:12:56.652168   70755 retry.go:31] will retry after 321.762819ms: missing components: kube-dns, kube-proxy
	I0725 19:12:56.982267   70755 system_pods.go:86] 8 kube-system pods found
	I0725 19:12:56.982298   70755 system_pods.go:89] "coredns-7db6d8ff4d-2slbz" [281e835c-643f-42e3-a2ce-16b15436f3b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 19:12:56.982305   70755 system_pods.go:89] "coredns-7db6d8ff4d-hwszp" [e5b5eb58-5b80-4cf5-932c-098f04951a2a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 19:12:56.982311   70755 system_pods.go:89] "etcd-enable-default-cni-889508" [4d6f1543-fdb6-4b23-b962-ddf41956e076] Running
	I0725 19:12:56.982317   70755 system_pods.go:89] "kube-apiserver-enable-default-cni-889508" [c7173340-d6c2-41a4-8950-fe80de3b7cc0] Running
	I0725 19:12:56.982323   70755 system_pods.go:89] "kube-controller-manager-enable-default-cni-889508" [39c5eb8d-3207-40f8-b5f4-c86992961c21] Running
	I0725 19:12:56.982330   70755 system_pods.go:89] "kube-proxy-q9vdf" [062160c5-1559-4ae4-96bc-9b76a23611c8] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 19:12:56.982337   70755 system_pods.go:89] "kube-scheduler-enable-default-cni-889508" [1889f7b0-cdf7-4545-b388-e2e4fd2322f0] Running
	I0725 19:12:56.982346   70755 system_pods.go:89] "storage-provisioner" [02448b95-67c9-47a3-af7b-e80b6b8e6745] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 19:12:56.982367   70755 retry.go:31] will retry after 480.449473ms: missing components: kube-dns, kube-proxy
	I0725 19:12:57.470844   70755 system_pods.go:86] 7 kube-system pods found
	I0725 19:12:57.470881   70755 system_pods.go:89] "coredns-7db6d8ff4d-2slbz" [281e835c-643f-42e3-a2ce-16b15436f3b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 19:12:57.470889   70755 system_pods.go:89] "etcd-enable-default-cni-889508" [4d6f1543-fdb6-4b23-b962-ddf41956e076] Running
	I0725 19:12:57.470900   70755 system_pods.go:89] "kube-apiserver-enable-default-cni-889508" [c7173340-d6c2-41a4-8950-fe80de3b7cc0] Running
	I0725 19:12:57.470907   70755 system_pods.go:89] "kube-controller-manager-enable-default-cni-889508" [39c5eb8d-3207-40f8-b5f4-c86992961c21] Running
	I0725 19:12:57.470913   70755 system_pods.go:89] "kube-proxy-q9vdf" [062160c5-1559-4ae4-96bc-9b76a23611c8] Running
	I0725 19:12:57.470919   70755 system_pods.go:89] "kube-scheduler-enable-default-cni-889508" [1889f7b0-cdf7-4545-b388-e2e4fd2322f0] Running
	I0725 19:12:57.470927   70755 system_pods.go:89] "storage-provisioner" [02448b95-67c9-47a3-af7b-e80b6b8e6745] Running
	I0725 19:12:57.470935   70755 system_pods.go:126] duration metric: took 1.23813099s to wait for k8s-apps to be running ...
	I0725 19:12:57.470943   70755 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 19:12:57.470986   70755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 19:12:57.489530   70755 system_svc.go:56] duration metric: took 18.57654ms WaitForService to wait for kubelet
	I0725 19:12:57.489569   70755 kubeadm.go:582] duration metric: took 2.258535455s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 19:12:57.489592   70755 node_conditions.go:102] verifying NodePressure condition ...
	I0725 19:12:57.492203   70755 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 19:12:57.492234   70755 node_conditions.go:123] node cpu capacity is 2
	I0725 19:12:57.492244   70755 node_conditions.go:105] duration metric: took 2.646002ms to run NodePressure ...
	I0725 19:12:57.492255   70755 start.go:241] waiting for startup goroutines ...
	I0725 19:12:57.492264   70755 start.go:246] waiting for cluster config update ...
	I0725 19:12:57.492277   70755 start.go:255] writing updated cluster config ...
	I0725 19:12:57.492561   70755 ssh_runner.go:195] Run: rm -f paused
	I0725 19:12:57.543698   70755 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 19:12:57.545238   70755 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-889508" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.650870245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934778650840488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c11d2e5-e028-46af-a1a2-2bf928dbc1ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.651614223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69f589e7-0726-45f4-86cc-b72d9d9a158b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.651696396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69f589e7-0726-45f4-86cc-b72d9d9a158b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.651884465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933458358636625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13736cd3f5220b619b6593285830274856c0003f93f2e807567c0b5767c36c4,PodSandboxId:7e0fd69172ec750561f59555c7adf57f757763c8e417b62e43f5d6e5af792ddf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933438518877123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a7da430-e23b-4464-81b8-46671459aca5,},Annotations:map[string]string{io.kubernetes.container.hash: 830533e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80,PodSandboxId:4e428ebdbbe18502cedb32c6256d2e59ea7163947d4dfacd5df792f2a6c9b148,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933435258842719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89vvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4ee327-2b83-4102-aeac-9f2285355345,},Annotations:map[string]string{io.kubernetes.container.hash: 5a287f46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933427524785441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb,PodSandboxId:015513a6da5f81cc996a4690257f67c1dcef91f659a930a1506b7451e1202c36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933427583287463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xk2lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d74b42c-16cd-4714-803b-129e1d2ec
722,},Annotations:map[string]string{io.kubernetes.container.hash: 611116f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3,PodSandboxId:bcbae76557f034a99104ad30d29dd4f98df35efcf8e16486d2b48051dff70808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933423327806526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac13445217caad08c5c6918f3197267,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4,PodSandboxId:252d2fc7d92b9292e3d60c62c3665fc8495427deee52a4a7c16fa22fbe2e0028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933423320089038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cb3377fd6b1c6fe8ec6cfba0898fdf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 41461ede,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef,PodSandboxId:ebb38f4fbb2b0f57f29ddfdab7b93face599ee1a87942eced694c51d04812a0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933423323005132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b972c23d655d4ba9bf529d7046ab3fa,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c,PodSandboxId:38d2459c446799447c1324956aef16ee54786ec5ac27eb96f600e1b2b6b7ecac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933423318832290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac6d37f2b02da78cf40957bea7f3d5f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: f51f1457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69f589e7-0726-45f4-86cc-b72d9d9a158b name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.691825575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06942601-3685-4b2d-82c9-8520aac0e96c name=/runtime.v1.RuntimeService/Version
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.691937167Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06942601-3685-4b2d-82c9-8520aac0e96c name=/runtime.v1.RuntimeService/Version
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.693724826Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e9a9a625-5d96-4b2b-8e02-2d3ea3da93e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.694533175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934778694494297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9a9a625-5d96-4b2b-8e02-2d3ea3da93e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.695525607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f580e26-0a17-4260-9bf2-6ab84648618c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.695596420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f580e26-0a17-4260-9bf2-6ab84648618c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.695866488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933458358636625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13736cd3f5220b619b6593285830274856c0003f93f2e807567c0b5767c36c4,PodSandboxId:7e0fd69172ec750561f59555c7adf57f757763c8e417b62e43f5d6e5af792ddf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933438518877123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a7da430-e23b-4464-81b8-46671459aca5,},Annotations:map[string]string{io.kubernetes.container.hash: 830533e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80,PodSandboxId:4e428ebdbbe18502cedb32c6256d2e59ea7163947d4dfacd5df792f2a6c9b148,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933435258842719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89vvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4ee327-2b83-4102-aeac-9f2285355345,},Annotations:map[string]string{io.kubernetes.container.hash: 5a287f46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933427524785441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb,PodSandboxId:015513a6da5f81cc996a4690257f67c1dcef91f659a930a1506b7451e1202c36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933427583287463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xk2lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d74b42c-16cd-4714-803b-129e1d2ec
722,},Annotations:map[string]string{io.kubernetes.container.hash: 611116f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3,PodSandboxId:bcbae76557f034a99104ad30d29dd4f98df35efcf8e16486d2b48051dff70808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933423327806526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac13445217caad08c5c6918f3197267,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4,PodSandboxId:252d2fc7d92b9292e3d60c62c3665fc8495427deee52a4a7c16fa22fbe2e0028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933423320089038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cb3377fd6b1c6fe8ec6cfba0898fdf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 41461ede,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef,PodSandboxId:ebb38f4fbb2b0f57f29ddfdab7b93face599ee1a87942eced694c51d04812a0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933423323005132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b972c23d655d4ba9bf529d7046ab3fa,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c,PodSandboxId:38d2459c446799447c1324956aef16ee54786ec5ac27eb96f600e1b2b6b7ecac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933423318832290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac6d37f2b02da78cf40957bea7f3d5f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: f51f1457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f580e26-0a17-4260-9bf2-6ab84648618c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.736534425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b484cb5-51a8-4c9f-8f64-be8adf72944c name=/runtime.v1.RuntimeService/Version
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.736607783Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b484cb5-51a8-4c9f-8f64-be8adf72944c name=/runtime.v1.RuntimeService/Version
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.738034414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74582446-8cb6-4c1c-95c5-eac403d1be36 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.738403801Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934778738381154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74582446-8cb6-4c1c-95c5-eac403d1be36 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.739029811Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccd9750b-97d5-4d80-bf94-55b691580f54 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.739117030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccd9750b-97d5-4d80-bf94-55b691580f54 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.739409708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933458358636625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13736cd3f5220b619b6593285830274856c0003f93f2e807567c0b5767c36c4,PodSandboxId:7e0fd69172ec750561f59555c7adf57f757763c8e417b62e43f5d6e5af792ddf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933438518877123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a7da430-e23b-4464-81b8-46671459aca5,},Annotations:map[string]string{io.kubernetes.container.hash: 830533e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80,PodSandboxId:4e428ebdbbe18502cedb32c6256d2e59ea7163947d4dfacd5df792f2a6c9b148,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933435258842719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89vvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4ee327-2b83-4102-aeac-9f2285355345,},Annotations:map[string]string{io.kubernetes.container.hash: 5a287f46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933427524785441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb,PodSandboxId:015513a6da5f81cc996a4690257f67c1dcef91f659a930a1506b7451e1202c36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933427583287463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xk2lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d74b42c-16cd-4714-803b-129e1d2ec
722,},Annotations:map[string]string{io.kubernetes.container.hash: 611116f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3,PodSandboxId:bcbae76557f034a99104ad30d29dd4f98df35efcf8e16486d2b48051dff70808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933423327806526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac13445217caad08c5c6918f3197267,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4,PodSandboxId:252d2fc7d92b9292e3d60c62c3665fc8495427deee52a4a7c16fa22fbe2e0028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933423320089038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cb3377fd6b1c6fe8ec6cfba0898fdf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 41461ede,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef,PodSandboxId:ebb38f4fbb2b0f57f29ddfdab7b93face599ee1a87942eced694c51d04812a0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933423323005132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b972c23d655d4ba9bf529d7046ab3fa,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c,PodSandboxId:38d2459c446799447c1324956aef16ee54786ec5ac27eb96f600e1b2b6b7ecac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933423318832290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac6d37f2b02da78cf40957bea7f3d5f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: f51f1457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ccd9750b-97d5-4d80-bf94-55b691580f54 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.773769607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d73346c-591b-43a3-87c2-0cd89f9748eb name=/runtime.v1.RuntimeService/Version
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.773857426Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d73346c-591b-43a3-87c2-0cd89f9748eb name=/runtime.v1.RuntimeService/Version
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.775309552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af6ba3ed-fc8f-4380-bb7e-a4e4669fb1c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.775881824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934778775854373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af6ba3ed-fc8f-4380-bb7e-a4e4669fb1c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.776359875Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0beb081b-2377-4634-b6d9-fc8d90a97f7c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.776507699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0beb081b-2377-4634-b6d9-fc8d90a97f7c name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:12:58 embed-certs-646344 crio[719]: time="2024-07-25 19:12:58.776777188Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933458358636625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13736cd3f5220b619b6593285830274856c0003f93f2e807567c0b5767c36c4,PodSandboxId:7e0fd69172ec750561f59555c7adf57f757763c8e417b62e43f5d6e5af792ddf,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933438518877123,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a7da430-e23b-4464-81b8-46671459aca5,},Annotations:map[string]string{io.kubernetes.container.hash: 830533e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80,PodSandboxId:4e428ebdbbe18502cedb32c6256d2e59ea7163947d4dfacd5df792f2a6c9b148,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933435258842719,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-89vvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af4ee327-2b83-4102-aeac-9f2285355345,},Annotations:map[string]string{io.kubernetes.container.hash: 5a287f46,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354,PodSandboxId:faf79030adb2e04ac4a8ca6f1c6322a2bc4c3d38b66f916689c99859e3d5edec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933427524785441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3d10d635-9457-42c3-9183-abc4a7205c48,},Annotations:map[string]string{io.kubernetes.container.hash: c479e5ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb,PodSandboxId:015513a6da5f81cc996a4690257f67c1dcef91f659a930a1506b7451e1202c36,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721933427583287463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xk2lq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d74b42c-16cd-4714-803b-129e1d2ec
722,},Annotations:map[string]string{io.kubernetes.container.hash: 611116f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3,PodSandboxId:bcbae76557f034a99104ad30d29dd4f98df35efcf8e16486d2b48051dff70808,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721933423327806526,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac13445217caad08c5c6918f3197267,},Annota
tions:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4,PodSandboxId:252d2fc7d92b9292e3d60c62c3665fc8495427deee52a4a7c16fa22fbe2e0028,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721933423320089038,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79cb3377fd6b1c6fe8ec6cfba0898fdf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 41461ede,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef,PodSandboxId:ebb38f4fbb2b0f57f29ddfdab7b93face599ee1a87942eced694c51d04812a0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721933423323005132,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b972c23d655d4ba9bf529d7046ab3fa,},Annotations:map[string]string{io.
kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c,PodSandboxId:38d2459c446799447c1324956aef16ee54786ec5ac27eb96f600e1b2b6b7ecac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721933423318832290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-646344,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac6d37f2b02da78cf40957bea7f3d5f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: f51f1457,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0beb081b-2377-4634-b6d9-fc8d90a97f7c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd45387197a71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Running             storage-provisioner       2                   faf79030adb2e       storage-provisioner
	f13736cd3f522       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   7e0fd69172ec7       busybox
	e265ce86dc50d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   4e428ebdbbe18       coredns-7db6d8ff4d-89vvx
	3396bd8e6a955       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      22 minutes ago      Running             kube-proxy                1                   015513a6da5f8       kube-proxy-xk2lq
	e75aba803f380       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   faf79030adb2e       storage-provisioner
	980f1cafbf9df       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      22 minutes ago      Running             kube-scheduler            1                   bcbae76557f03       kube-scheduler-embed-certs-646344
	a057db9df5d79       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      22 minutes ago      Running             kube-controller-manager   1                   ebb38f4fbb2b0       kube-controller-manager-embed-certs-646344
	c4e8d2e70adcf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      22 minutes ago      Running             etcd                      1                   252d2fc7d92b9       etcd-embed-certs-646344
	e29758ae5e857       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      22 minutes ago      Running             kube-apiserver            1                   38d2459c44679       kube-apiserver-embed-certs-646344
	
	
	==> coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35265 - 23403 "HINFO IN 8076321064129470149.2907509352587689521. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010791408s
	
	
	==> describe nodes <==
	Name:               embed-certs-646344
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-646344
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=embed-certs-646344
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_44_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:44:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-646344
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 19:12:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 19:11:23 +0000   Thu, 25 Jul 2024 18:44:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 19:11:23 +0000   Thu, 25 Jul 2024 18:44:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 19:11:23 +0000   Thu, 25 Jul 2024 18:44:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 19:11:23 +0000   Thu, 25 Jul 2024 18:50:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.133
	  Hostname:    embed-certs-646344
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e5df9354e56484d8dfebe496d944239
	  System UUID:                8e5df935-4e56-484d-8dfe-be496d944239
	  Boot ID:                    f262b540-66c0-40e8-9836-cc83f8c1974f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-7db6d8ff4d-89vvx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-embed-certs-646344                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-646344             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-646344    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-xk2lq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-embed-certs-646344             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-569cc877fc-4gcts               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node embed-certs-646344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node embed-certs-646344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node embed-certs-646344 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-646344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-646344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-646344 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node embed-certs-646344 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m                node-controller  Node embed-certs-646344 event: Registered Node embed-certs-646344 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-646344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-646344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-646344 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-646344 event: Registered Node embed-certs-646344 in Controller
	
	
	==> dmesg <==
	[Jul25 18:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056648] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038514] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.944685] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.891438] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.449935] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.051698] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.063508] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058787] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.201858] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.110571] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.269119] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.212361] systemd-fstab-generator[801]: Ignoring "noauto" option for root device
	[  +2.409705] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +0.063408] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.521268] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.445639] systemd-fstab-generator[1538]: Ignoring "noauto" option for root device
	[  +3.280783] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.342513] kauditd_printk_skb: 35 callbacks suppressed
	[ +19.825225] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] <==
	{"level":"info","ts":"2024-07-25T19:00:25.179801Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":829}
	{"level":"info","ts":"2024-07-25T19:00:25.189173Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":829,"took":"8.824944ms","hash":1388300861,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2121728,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-25T19:00:25.189259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1388300861,"revision":829,"compact-revision":-1}
	{"level":"info","ts":"2024-07-25T19:05:25.186717Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1072}
	{"level":"info","ts":"2024-07-25T19:05:25.190499Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1072,"took":"3.286147ms","hash":3111945681,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1130496,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2024-07-25T19:05:25.190574Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3111945681,"revision":1072,"compact-revision":829}
	{"level":"warn","ts":"2024-07-25T19:09:43.533856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.43226ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9180465705397818957 > lease_revoke:<id:7f6790eb3ae49600>","response":"size:28"}
	{"level":"info","ts":"2024-07-25T19:09:43.666341Z","caller":"traceutil/trace.go:171","msg":"trace[236746191] linearizableReadLoop","detail":"{readStateIndex:1797; appliedIndex:1796; }","duration":"119.650946ms","start":"2024-07-25T19:09:43.5466Z","end":"2024-07-25T19:09:43.666251Z","steps":["trace[236746191] 'read index received'  (duration: 119.429157ms)","trace[236746191] 'applied index is now lower than readState.Index'  (duration: 221.017µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T19:09:43.666583Z","caller":"traceutil/trace.go:171","msg":"trace[888893667] transaction","detail":"{read_only:false; response_revision:1525; number_of_response:1; }","duration":"128.528492ms","start":"2024-07-25T19:09:43.538034Z","end":"2024-07-25T19:09:43.666563Z","steps":["trace[888893667] 'process raft request'  (duration: 128.047223ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T19:09:43.666731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.052545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-25T19:09:43.667208Z","caller":"traceutil/trace.go:171","msg":"trace[681303393] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1525; }","duration":"120.624367ms","start":"2024-07-25T19:09:43.546566Z","end":"2024-07-25T19:09:43.667191Z","steps":["trace[681303393] 'agreement among raft nodes before linearized reading'  (duration: 120.059256ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T19:10:10.191162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.697407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-25T19:10:10.191258Z","caller":"traceutil/trace.go:171","msg":"trace[469705159] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1545; }","duration":"227.847711ms","start":"2024-07-25T19:10:09.963395Z","end":"2024-07-25T19:10:10.191243Z","steps":["trace[469705159] 'count revisions from in-memory index tree'  (duration: 227.555497ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T19:10:10.348357Z","caller":"traceutil/trace.go:171","msg":"trace[792644356] transaction","detail":"{read_only:false; response_revision:1546; number_of_response:1; }","duration":"129.107685ms","start":"2024-07-25T19:10:10.219231Z","end":"2024-07-25T19:10:10.348338Z","steps":["trace[792644356] 'process raft request'  (duration: 128.984364ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T19:10:13.330202Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.563439ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9180465705397819099 > lease_revoke:<id:7f6790eb3ae49693>","response":"size:28"}
	{"level":"info","ts":"2024-07-25T19:10:25.193191Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1316}
	{"level":"info","ts":"2024-07-25T19:10:25.196318Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1316,"took":"2.889749ms","hash":2237958157,"current-db-size-bytes":2121728,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1110016,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2024-07-25T19:10:25.196372Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2237958157,"revision":1316,"compact-revision":1072}
	{"level":"info","ts":"2024-07-25T19:11:42.017504Z","caller":"traceutil/trace.go:171","msg":"trace[1783985037] transaction","detail":"{read_only:false; response_revision:1621; number_of_response:1; }","duration":"223.624252ms","start":"2024-07-25T19:11:41.793799Z","end":"2024-07-25T19:11:42.017423Z","steps":["trace[1783985037] 'process raft request'  (duration: 223.514357ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T19:11:43.471762Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.025207ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9180465705397819547 > lease_revoke:<id:7f6790eb3ae4984d>","response":"size:28"}
	{"level":"warn","ts":"2024-07-25T19:12:08.197278Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.627471ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9180465705397819666 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.133\" mod_revision:1634 > success:<request_put:<key:\"/registry/masterleases/192.168.61.133\" value_size:68 lease:9180465705397819664 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.133\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-25T19:12:08.197575Z","caller":"traceutil/trace.go:171","msg":"trace[912519218] transaction","detail":"{read_only:false; response_revision:1643; number_of_response:1; }","duration":"261.893285ms","start":"2024-07-25T19:12:07.935664Z","end":"2024-07-25T19:12:08.197557Z","steps":["trace[912519218] 'process raft request'  (duration: 130.104982ms)","trace[912519218] 'compare'  (duration: 130.48331ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-25T19:12:12.668018Z","caller":"traceutil/trace.go:171","msg":"trace[1184110740] transaction","detail":"{read_only:false; response_revision:1646; number_of_response:1; }","duration":"109.293017ms","start":"2024-07-25T19:12:12.558704Z","end":"2024-07-25T19:12:12.667997Z","steps":["trace[1184110740] 'process raft request'  (duration: 109.187977ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-25T19:12:31.047061Z","caller":"traceutil/trace.go:171","msg":"trace[1005390729] transaction","detail":"{read_only:false; response_revision:1662; number_of_response:1; }","duration":"291.445093ms","start":"2024-07-25T19:12:30.755597Z","end":"2024-07-25T19:12:31.047042Z","steps":["trace[1005390729] 'process raft request'  (duration: 291.341722ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-25T19:12:32.998095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.414035ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9180465705397819786 > lease_revoke:<id:7f6790eb3ae4993d>","response":"size:28"}
	
	
	==> kernel <==
	 19:12:59 up 22 min,  0 users,  load average: 0.05, 0.10, 0.09
	Linux embed-certs-646344 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] <==
	I0725 19:06:27.610962       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:08:27.609282       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:08:27.609625       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:08:27.609661       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:08:27.611484       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:08:27.611518       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 19:08:27.611526       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:10:26.613721       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:10:26.613842       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0725 19:10:27.614071       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:10:27.614256       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 19:10:27.614304       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:10:27.614201       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:10:27.614421       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:10:27.615640       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:11:27.615272       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:11:27.615560       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0725 19:11:27.615604       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:11:27.616357       1 handler_proxy.go:93] no RequestInfo found in the context
	E0725 19:11:27.616490       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:11:27.617656       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] <==
	I0725 19:07:10.751584       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:07:40.246308       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:07:40.758833       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:08:10.250925       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:08:10.767838       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:08:40.256510       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:08:40.775496       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:09:10.262669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:09:10.784081       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:09:40.269154       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:09:40.793340       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:10:10.274710       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:10:10.801244       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:10:40.279930       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:10:40.811068       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:11:10.288895       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:11:10.820576       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:11:40.294278       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:11:40.828051       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0725 19:12:05.198716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="474.496µs"
	E0725 19:12:10.299585       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:12:10.839779       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0725 19:12:19.191294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="160.097µs"
	E0725 19:12:40.305294       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0725 19:12:40.849395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] <==
	I0725 18:50:27.803400       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:50:27.811844       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.133"]
	I0725 18:50:27.843093       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 18:50:27.843136       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:50:27.843151       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:50:27.845240       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:50:27.845573       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:50:27.845597       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:50:27.846781       1 config.go:192] "Starting service config controller"
	I0725 18:50:27.846809       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:50:27.846834       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:50:27.846837       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:50:27.847286       1 config.go:319] "Starting node config controller"
	I0725 18:50:27.847312       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:50:27.947499       1 shared_informer.go:320] Caches are synced for node config
	I0725 18:50:27.947539       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:50:27.947598       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] <==
	I0725 18:50:24.243355       1 serving.go:380] Generated self-signed cert in-memory
	W0725 18:50:26.500954       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 18:50:26.501111       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:50:26.501145       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 18:50:26.501215       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 18:50:26.619761       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 18:50:26.619790       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:50:26.621785       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:50:26.621908       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:50:26.625823       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 18:50:26.625893       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 18:50:26.722885       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 19:10:23 embed-certs-646344 kubelet[932]: E0725 19:10:23.179934     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:10:35 embed-certs-646344 kubelet[932]: E0725 19:10:35.175660     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:10:46 embed-certs-646344 kubelet[932]: E0725 19:10:46.175720     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:10:59 embed-certs-646344 kubelet[932]: E0725 19:10:59.175192     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:11:13 embed-certs-646344 kubelet[932]: E0725 19:11:13.175390     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:11:22 embed-certs-646344 kubelet[932]: E0725 19:11:22.190895     932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:11:22 embed-certs-646344 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:11:22 embed-certs-646344 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:11:22 embed-certs-646344 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:11:22 embed-certs-646344 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:11:26 embed-certs-646344 kubelet[932]: E0725 19:11:26.176326     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:11:37 embed-certs-646344 kubelet[932]: E0725 19:11:37.176168     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:11:50 embed-certs-646344 kubelet[932]: E0725 19:11:50.189276     932 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 25 19:11:50 embed-certs-646344 kubelet[932]: E0725 19:11:50.189610     932 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 25 19:11:50 embed-certs-646344 kubelet[932]: E0725 19:11:50.189888     932 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmsv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-4gcts_kube-system(688239e2-95b8-4344-b3e5-5199f9504a19): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 25 19:11:50 embed-certs-646344 kubelet[932]: E0725 19:11:50.190003     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:12:05 embed-certs-646344 kubelet[932]: E0725 19:12:05.177037     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:12:19 embed-certs-646344 kubelet[932]: E0725 19:12:19.176687     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:12:22 embed-certs-646344 kubelet[932]: E0725 19:12:22.193005     932 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:12:22 embed-certs-646344 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:12:22 embed-certs-646344 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:12:22 embed-certs-646344 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:12:22 embed-certs-646344 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:12:31 embed-certs-646344 kubelet[932]: E0725 19:12:31.176035     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	Jul 25 19:12:46 embed-certs-646344 kubelet[932]: E0725 19:12:46.177215     932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-4gcts" podUID="688239e2-95b8-4344-b3e5-5199f9504a19"
	
	
	==> storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] <==
	I0725 18:50:27.766950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0725 18:50:57.769816       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] <==
	I0725 18:50:58.453399       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 18:50:58.464107       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 18:50:58.464278       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 18:50:58.483189       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 18:50:58.483413       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-646344_202205bb-0139-4955-969b-81fbf5fd7ef5!
	I0725 18:50:58.485801       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3fd3587c-afb1-4221-a023-d925e899bfae", APIVersion:"v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-646344_202205bb-0139-4955-969b-81fbf5fd7ef5 became leader
	I0725 18:50:58.584350       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-646344_202205bb-0139-4955-969b-81fbf5fd7ef5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-646344 -n embed-certs-646344
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-646344 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-4gcts
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-646344 describe pod metrics-server-569cc877fc-4gcts
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-646344 describe pod metrics-server-569cc877fc-4gcts: exit status 1 (81.812129ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-4gcts" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-646344 describe pod metrics-server-569cc877fc-4gcts: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.63s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (310.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-371663 -n no-preload-371663
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-25 19:09:35.153520039 +0000 UTC m=+6049.234251887
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-371663 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-371663 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.357µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-371663 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-371663 -n no-preload-371663
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-371663 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-371663 logs -n 25: (1.19387038s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-819413             | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-819413                  | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-108542        | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| image   | newest-cni-819413 image list                           | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-371663                  | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-371663 --memory=2200                     | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-600433       | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:54 UTC |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-646344            | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-108542             | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-646344                 | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC | 25 Jul 24 18:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 19:09 UTC | 25 Jul 24 19:09 UTC |
	| start   | -p auto-889508 --memory=3072                           | auto-889508                  | jenkins | v1.33.1 | 25 Jul 24 19:09 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 19:09:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 19:09:12.428832   66554 out.go:291] Setting OutFile to fd 1 ...
	I0725 19:09:12.428962   66554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:09:12.428972   66554 out.go:304] Setting ErrFile to fd 2...
	I0725 19:09:12.428979   66554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:09:12.429187   66554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 19:09:12.429738   66554 out.go:298] Setting JSON to false
	I0725 19:09:12.430679   66554 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6696,"bootTime":1721927856,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 19:09:12.430740   66554 start.go:139] virtualization: kvm guest
	I0725 19:09:12.432852   66554 out.go:177] * [auto-889508] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 19:09:12.434663   66554 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 19:09:12.434675   66554 notify.go:220] Checking for updates...
	I0725 19:09:12.437508   66554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 19:09:12.438946   66554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 19:09:12.440076   66554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 19:09:12.441254   66554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 19:09:12.442382   66554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 19:09:12.443789   66554 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:09:12.443876   66554 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 19:09:12.443958   66554 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 19:09:12.444027   66554 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 19:09:12.481134   66554 out.go:177] * Using the kvm2 driver based on user configuration
	I0725 19:09:12.482359   66554 start.go:297] selected driver: kvm2
	I0725 19:09:12.482378   66554 start.go:901] validating driver "kvm2" against <nil>
	I0725 19:09:12.482393   66554 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 19:09:12.483107   66554 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 19:09:12.483213   66554 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 19:09:12.498125   66554 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 19:09:12.498198   66554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 19:09:12.498521   66554 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 19:09:12.498566   66554 cni.go:84] Creating CNI manager for ""
	I0725 19:09:12.498577   66554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 19:09:12.498588   66554 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 19:09:12.498700   66554 start.go:340] cluster config:
	{Name:auto-889508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:09:12.498840   66554 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 19:09:12.501354   66554 out.go:177] * Starting "auto-889508" primary control-plane node in "auto-889508" cluster
	I0725 19:09:12.502527   66554 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 19:09:12.502556   66554 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 19:09:12.502565   66554 cache.go:56] Caching tarball of preloaded images
	I0725 19:09:12.502670   66554 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 19:09:12.502684   66554 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 19:09:12.502786   66554 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/config.json ...
	I0725 19:09:12.502805   66554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/auto-889508/config.json: {Name:mk22c0beed37692fed1384ba7d1ee5512291116c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:09:12.502979   66554 start.go:360] acquireMachinesLock for auto-889508: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 19:09:12.503011   66554 start.go:364] duration metric: took 16.984µs to acquireMachinesLock for "auto-889508"
	I0725 19:09:12.503033   66554 start.go:93] Provisioning new machine with config: &{Name:auto-889508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:auto-889508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 19:09:12.503109   66554 start.go:125] createHost starting for "" (driver="kvm2")
	I0725 19:09:12.504550   66554 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 19:09:12.504672   66554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 19:09:12.504705   66554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 19:09:12.518624   66554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36629
	I0725 19:09:12.519070   66554 main.go:141] libmachine: () Calling .GetVersion
	I0725 19:09:12.519657   66554 main.go:141] libmachine: Using API Version  1
	I0725 19:09:12.519672   66554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 19:09:12.520007   66554 main.go:141] libmachine: () Calling .GetMachineName
	I0725 19:09:12.520226   66554 main.go:141] libmachine: (auto-889508) Calling .GetMachineName
	I0725 19:09:12.520386   66554 main.go:141] libmachine: (auto-889508) Calling .DriverName
	I0725 19:09:12.520541   66554 start.go:159] libmachine.API.Create for "auto-889508" (driver="kvm2")
	I0725 19:09:12.520567   66554 client.go:168] LocalClient.Create starting
	I0725 19:09:12.520606   66554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem
	I0725 19:09:12.520643   66554 main.go:141] libmachine: Decoding PEM data...
	I0725 19:09:12.520662   66554 main.go:141] libmachine: Parsing certificate...
	I0725 19:09:12.520734   66554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem
	I0725 19:09:12.520758   66554 main.go:141] libmachine: Decoding PEM data...
	I0725 19:09:12.520774   66554 main.go:141] libmachine: Parsing certificate...
	I0725 19:09:12.520799   66554 main.go:141] libmachine: Running pre-create checks...
	I0725 19:09:12.520811   66554 main.go:141] libmachine: (auto-889508) Calling .PreCreateCheck
	I0725 19:09:12.521248   66554 main.go:141] libmachine: (auto-889508) Calling .GetConfigRaw
	I0725 19:09:12.521726   66554 main.go:141] libmachine: Creating machine...
	I0725 19:09:12.521745   66554 main.go:141] libmachine: (auto-889508) Calling .Create
	I0725 19:09:12.521920   66554 main.go:141] libmachine: (auto-889508) Creating KVM machine...
	I0725 19:09:12.523280   66554 main.go:141] libmachine: (auto-889508) DBG | found existing default KVM network
	I0725 19:09:12.524851   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:12.524716   66577 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015950}
	I0725 19:09:12.524869   66554 main.go:141] libmachine: (auto-889508) DBG | created network xml: 
	I0725 19:09:12.524882   66554 main.go:141] libmachine: (auto-889508) DBG | <network>
	I0725 19:09:12.524891   66554 main.go:141] libmachine: (auto-889508) DBG |   <name>mk-auto-889508</name>
	I0725 19:09:12.524899   66554 main.go:141] libmachine: (auto-889508) DBG |   <dns enable='no'/>
	I0725 19:09:12.524906   66554 main.go:141] libmachine: (auto-889508) DBG |   
	I0725 19:09:12.524917   66554 main.go:141] libmachine: (auto-889508) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0725 19:09:12.524929   66554 main.go:141] libmachine: (auto-889508) DBG |     <dhcp>
	I0725 19:09:12.524939   66554 main.go:141] libmachine: (auto-889508) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0725 19:09:12.524949   66554 main.go:141] libmachine: (auto-889508) DBG |     </dhcp>
	I0725 19:09:12.524978   66554 main.go:141] libmachine: (auto-889508) DBG |   </ip>
	I0725 19:09:12.524998   66554 main.go:141] libmachine: (auto-889508) DBG |   
	I0725 19:09:12.525009   66554 main.go:141] libmachine: (auto-889508) DBG | </network>
	I0725 19:09:12.525019   66554 main.go:141] libmachine: (auto-889508) DBG | 
	I0725 19:09:12.530025   66554 main.go:141] libmachine: (auto-889508) DBG | trying to create private KVM network mk-auto-889508 192.168.39.0/24...
	I0725 19:09:12.603261   66554 main.go:141] libmachine: (auto-889508) DBG | private KVM network mk-auto-889508 192.168.39.0/24 created
	I0725 19:09:12.603296   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:12.603253   66577 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 19:09:12.603310   66554 main.go:141] libmachine: (auto-889508) Setting up store path in /home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508 ...
	I0725 19:09:12.603326   66554 main.go:141] libmachine: (auto-889508) Building disk image from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 19:09:12.603449   66554 main.go:141] libmachine: (auto-889508) Downloading /home/jenkins/minikube-integration/19326-5877/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0725 19:09:12.846823   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:12.846712   66577 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508/id_rsa...
	I0725 19:09:12.971022   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:12.970892   66577 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508/auto-889508.rawdisk...
	I0725 19:09:12.971673   66554 main.go:141] libmachine: (auto-889508) DBG | Writing magic tar header
	I0725 19:09:12.971695   66554 main.go:141] libmachine: (auto-889508) DBG | Writing SSH key tar header
	I0725 19:09:12.972303   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:12.972220   66577 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508 ...
	I0725 19:09:12.972408   66554 main.go:141] libmachine: (auto-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508
	I0725 19:09:12.972456   66554 main.go:141] libmachine: (auto-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube/machines
	I0725 19:09:12.972478   66554 main.go:141] libmachine: (auto-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 19:09:12.972491   66554 main.go:141] libmachine: (auto-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508 (perms=drwx------)
	I0725 19:09:12.972504   66554 main.go:141] libmachine: (auto-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube/machines (perms=drwxr-xr-x)
	I0725 19:09:12.972517   66554 main.go:141] libmachine: (auto-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877/.minikube (perms=drwxr-xr-x)
	I0725 19:09:12.972530   66554 main.go:141] libmachine: (auto-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19326-5877
	I0725 19:09:12.972545   66554 main.go:141] libmachine: (auto-889508) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0725 19:09:12.972556   66554 main.go:141] libmachine: (auto-889508) DBG | Checking permissions on dir: /home/jenkins
	I0725 19:09:12.972567   66554 main.go:141] libmachine: (auto-889508) Setting executable bit set on /home/jenkins/minikube-integration/19326-5877 (perms=drwxrwxr-x)
	I0725 19:09:12.972577   66554 main.go:141] libmachine: (auto-889508) DBG | Checking permissions on dir: /home
	I0725 19:09:12.972588   66554 main.go:141] libmachine: (auto-889508) DBG | Skipping /home - not owner
	I0725 19:09:12.972601   66554 main.go:141] libmachine: (auto-889508) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0725 19:09:12.972624   66554 main.go:141] libmachine: (auto-889508) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0725 19:09:12.972644   66554 main.go:141] libmachine: (auto-889508) Creating domain...
	I0725 19:09:12.973713   66554 main.go:141] libmachine: (auto-889508) define libvirt domain using xml: 
	I0725 19:09:12.973733   66554 main.go:141] libmachine: (auto-889508) <domain type='kvm'>
	I0725 19:09:12.973817   66554 main.go:141] libmachine: (auto-889508)   <name>auto-889508</name>
	I0725 19:09:12.973845   66554 main.go:141] libmachine: (auto-889508)   <memory unit='MiB'>3072</memory>
	I0725 19:09:12.973855   66554 main.go:141] libmachine: (auto-889508)   <vcpu>2</vcpu>
	I0725 19:09:12.973859   66554 main.go:141] libmachine: (auto-889508)   <features>
	I0725 19:09:12.973865   66554 main.go:141] libmachine: (auto-889508)     <acpi/>
	I0725 19:09:12.973868   66554 main.go:141] libmachine: (auto-889508)     <apic/>
	I0725 19:09:12.973876   66554 main.go:141] libmachine: (auto-889508)     <pae/>
	I0725 19:09:12.973884   66554 main.go:141] libmachine: (auto-889508)     
	I0725 19:09:12.973892   66554 main.go:141] libmachine: (auto-889508)   </features>
	I0725 19:09:12.973896   66554 main.go:141] libmachine: (auto-889508)   <cpu mode='host-passthrough'>
	I0725 19:09:12.973903   66554 main.go:141] libmachine: (auto-889508)   
	I0725 19:09:12.973909   66554 main.go:141] libmachine: (auto-889508)   </cpu>
	I0725 19:09:12.973915   66554 main.go:141] libmachine: (auto-889508)   <os>
	I0725 19:09:12.973919   66554 main.go:141] libmachine: (auto-889508)     <type>hvm</type>
	I0725 19:09:12.973924   66554 main.go:141] libmachine: (auto-889508)     <boot dev='cdrom'/>
	I0725 19:09:12.973929   66554 main.go:141] libmachine: (auto-889508)     <boot dev='hd'/>
	I0725 19:09:12.973934   66554 main.go:141] libmachine: (auto-889508)     <bootmenu enable='no'/>
	I0725 19:09:12.973941   66554 main.go:141] libmachine: (auto-889508)   </os>
	I0725 19:09:12.973946   66554 main.go:141] libmachine: (auto-889508)   <devices>
	I0725 19:09:12.973952   66554 main.go:141] libmachine: (auto-889508)     <disk type='file' device='cdrom'>
	I0725 19:09:12.973962   66554 main.go:141] libmachine: (auto-889508)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508/boot2docker.iso'/>
	I0725 19:09:12.973971   66554 main.go:141] libmachine: (auto-889508)       <target dev='hdc' bus='scsi'/>
	I0725 19:09:12.973976   66554 main.go:141] libmachine: (auto-889508)       <readonly/>
	I0725 19:09:12.973979   66554 main.go:141] libmachine: (auto-889508)     </disk>
	I0725 19:09:12.973985   66554 main.go:141] libmachine: (auto-889508)     <disk type='file' device='disk'>
	I0725 19:09:12.973993   66554 main.go:141] libmachine: (auto-889508)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0725 19:09:12.974017   66554 main.go:141] libmachine: (auto-889508)       <source file='/home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508/auto-889508.rawdisk'/>
	I0725 19:09:12.974029   66554 main.go:141] libmachine: (auto-889508)       <target dev='hda' bus='virtio'/>
	I0725 19:09:12.974035   66554 main.go:141] libmachine: (auto-889508)     </disk>
	I0725 19:09:12.974042   66554 main.go:141] libmachine: (auto-889508)     <interface type='network'>
	I0725 19:09:12.974049   66554 main.go:141] libmachine: (auto-889508)       <source network='mk-auto-889508'/>
	I0725 19:09:12.974058   66554 main.go:141] libmachine: (auto-889508)       <model type='virtio'/>
	I0725 19:09:12.974114   66554 main.go:141] libmachine: (auto-889508)     </interface>
	I0725 19:09:12.974137   66554 main.go:141] libmachine: (auto-889508)     <interface type='network'>
	I0725 19:09:12.974150   66554 main.go:141] libmachine: (auto-889508)       <source network='default'/>
	I0725 19:09:12.974219   66554 main.go:141] libmachine: (auto-889508)       <model type='virtio'/>
	I0725 19:09:12.974245   66554 main.go:141] libmachine: (auto-889508)     </interface>
	I0725 19:09:12.974250   66554 main.go:141] libmachine: (auto-889508)     <serial type='pty'>
	I0725 19:09:12.974256   66554 main.go:141] libmachine: (auto-889508)       <target port='0'/>
	I0725 19:09:12.974261   66554 main.go:141] libmachine: (auto-889508)     </serial>
	I0725 19:09:12.974272   66554 main.go:141] libmachine: (auto-889508)     <console type='pty'>
	I0725 19:09:12.974280   66554 main.go:141] libmachine: (auto-889508)       <target type='serial' port='0'/>
	I0725 19:09:12.974286   66554 main.go:141] libmachine: (auto-889508)     </console>
	I0725 19:09:12.974296   66554 main.go:141] libmachine: (auto-889508)     <rng model='virtio'>
	I0725 19:09:12.974304   66554 main.go:141] libmachine: (auto-889508)       <backend model='random'>/dev/random</backend>
	I0725 19:09:12.974314   66554 main.go:141] libmachine: (auto-889508)     </rng>
	I0725 19:09:12.974319   66554 main.go:141] libmachine: (auto-889508)     
	I0725 19:09:12.974329   66554 main.go:141] libmachine: (auto-889508)     
	I0725 19:09:12.974334   66554 main.go:141] libmachine: (auto-889508)   </devices>
	I0725 19:09:12.974340   66554 main.go:141] libmachine: (auto-889508) </domain>
	I0725 19:09:12.974347   66554 main.go:141] libmachine: (auto-889508) 
	I0725 19:09:12.978362   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:69:21:02 in network default
	I0725 19:09:12.979054   66554 main.go:141] libmachine: (auto-889508) Ensuring networks are active...
	I0725 19:09:12.979073   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:12.979810   66554 main.go:141] libmachine: (auto-889508) Ensuring network default is active
	I0725 19:09:12.980136   66554 main.go:141] libmachine: (auto-889508) Ensuring network mk-auto-889508 is active
	I0725 19:09:12.980614   66554 main.go:141] libmachine: (auto-889508) Getting domain xml...
	I0725 19:09:12.981266   66554 main.go:141] libmachine: (auto-889508) Creating domain...
	I0725 19:09:14.232850   66554 main.go:141] libmachine: (auto-889508) Waiting to get IP...
	I0725 19:09:14.233604   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:14.234053   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:14.234068   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:14.234033   66577 retry.go:31] will retry after 199.418331ms: waiting for machine to come up
	I0725 19:09:14.435512   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:14.436097   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:14.436126   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:14.436010   66577 retry.go:31] will retry after 314.895554ms: waiting for machine to come up
	I0725 19:09:14.752711   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:14.753280   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:14.753308   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:14.753255   66577 retry.go:31] will retry after 487.147268ms: waiting for machine to come up
	I0725 19:09:15.241677   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:15.242235   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:15.242263   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:15.242166   66577 retry.go:31] will retry after 485.598443ms: waiting for machine to come up
	I0725 19:09:15.729903   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:15.730369   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:15.730394   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:15.730326   66577 retry.go:31] will retry after 721.062478ms: waiting for machine to come up
	I0725 19:09:16.453308   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:16.453766   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:16.453795   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:16.453722   66577 retry.go:31] will retry after 913.373879ms: waiting for machine to come up
	I0725 19:09:17.369531   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:17.370015   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:17.370053   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:17.369973   66577 retry.go:31] will retry after 1.050223227s: waiting for machine to come up
	I0725 19:09:18.421535   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:18.422216   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:18.422241   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:18.422163   66577 retry.go:31] will retry after 1.305118933s: waiting for machine to come up
	I0725 19:09:19.729374   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:19.729826   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:19.729850   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:19.729783   66577 retry.go:31] will retry after 1.230791138s: waiting for machine to come up
	I0725 19:09:20.962666   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:20.963240   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:20.963284   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:20.963214   66577 retry.go:31] will retry after 1.794356329s: waiting for machine to come up
	I0725 19:09:22.758982   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:22.759445   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:22.759474   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:22.759386   66577 retry.go:31] will retry after 2.682055006s: waiting for machine to come up
	I0725 19:09:25.445266   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:25.445727   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:25.445750   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:25.445673   66577 retry.go:31] will retry after 2.255539425s: waiting for machine to come up
	I0725 19:09:27.703014   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:27.703491   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find current IP address of domain auto-889508 in network mk-auto-889508
	I0725 19:09:27.703513   66554 main.go:141] libmachine: (auto-889508) DBG | I0725 19:09:27.703434   66577 retry.go:31] will retry after 4.346218496s: waiting for machine to come up
	I0725 19:09:32.054791   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:32.055261   66554 main.go:141] libmachine: (auto-889508) Found IP for machine: 192.168.39.77
	I0725 19:09:32.055283   66554 main.go:141] libmachine: (auto-889508) Reserving static IP address...
	I0725 19:09:32.055296   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has current primary IP address 192.168.39.77 and MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:32.055686   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find host DHCP lease matching {name: "auto-889508", mac: "52:54:00:b3:8a:40", ip: "192.168.39.77"} in network mk-auto-889508
	I0725 19:09:32.133121   66554 main.go:141] libmachine: (auto-889508) DBG | Getting to WaitForSSH function...
	I0725 19:09:32.133151   66554 main.go:141] libmachine: (auto-889508) Reserved static IP address: 192.168.39.77
	I0725 19:09:32.133169   66554 main.go:141] libmachine: (auto-889508) Waiting for SSH to be available...
	I0725 19:09:32.135860   66554 main.go:141] libmachine: (auto-889508) DBG | domain auto-889508 has defined MAC address 52:54:00:b3:8a:40 in network mk-auto-889508
	I0725 19:09:32.136235   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b3:8a:40", ip: ""} in network mk-auto-889508
	I0725 19:09:32.136262   66554 main.go:141] libmachine: (auto-889508) DBG | unable to find defined IP address of network mk-auto-889508 interface with MAC address 52:54:00:b3:8a:40
	I0725 19:09:32.136443   66554 main.go:141] libmachine: (auto-889508) DBG | Using SSH client type: external
	I0725 19:09:32.136484   66554 main.go:141] libmachine: (auto-889508) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508/id_rsa (-rw-------)
	I0725 19:09:32.136516   66554 main.go:141] libmachine: (auto-889508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/auto-889508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 19:09:32.136529   66554 main.go:141] libmachine: (auto-889508) DBG | About to run SSH command:
	I0725 19:09:32.136544   66554 main.go:141] libmachine: (auto-889508) DBG | exit 0
	I0725 19:09:32.140208   66554 main.go:141] libmachine: (auto-889508) DBG | SSH cmd err, output: exit status 255: 
	I0725 19:09:32.140242   66554 main.go:141] libmachine: (auto-889508) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0725 19:09:32.140253   66554 main.go:141] libmachine: (auto-889508) DBG | command : exit 0
	I0725 19:09:32.140263   66554 main.go:141] libmachine: (auto-889508) DBG | err     : exit status 255
	I0725 19:09:32.140273   66554 main.go:141] libmachine: (auto-889508) DBG | output  : 
	
	
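The libmachine fragment above shows the driver polling for the domain's DHCP lease with a growing delay ("will retry after …: waiting for machine to come up") until an IP is found. The following is a minimal, self-contained Go sketch of that retry pattern; it is not minikube's actual implementation, and lookupIP is a hypothetical stub standing in for the real lease lookup.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for the real DHCP lease lookup;
// it fails a few times before "finding" an address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.77", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the wait with a little jitter, mirroring the
		// "will retry after ..." lines in the log above.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}
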
	==> CRI-O <==
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.757863261Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934575757843261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5ce1687-8b7e-4e17-b93d-a254f27637da name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.758456054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43c52ab7-7c63-4786-9b82-7356a81b6910 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.758532289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43c52ab7-7c63-4786-9b82-7356a81b6910 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.758717634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933485879283164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6b4b69fb05b208b705ffd093d0a15286a06e3ca32c24e4e66e19af235036eb,PodSandboxId:47feb5223f7c64f775b7350b826ee6cfd7b40d3555cf6d1a45f0a1cc2be70b99,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933465581590291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a19fc6a-6194-4c15-8414-a7c7da162bce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956,PodSandboxId:dd569d9d1412e29902cf9547fdcff22180a9dfd1b9987100901b7ef0e51f5ca9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933462768123994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-lq97z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035503b5-7acf-4f42-a057-b3346c9b9704,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933455065603157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
cd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9,PodSandboxId:eee515d54dbb47c3808abcbfb394f5976eafeb9d9fcd9f55b178c9380310b3fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721933455046807784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bf9rt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cbe378-8c6b-4034-9882-fc55c4eeca
38,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c,PodSandboxId:b553599d71041952311b42a6265e64ad21c76f4d30501b1bff26a61ffa1d1571,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721933451373806223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14ebf141570a844b702c633bb7812c81,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89,PodSandboxId:bbc06dc8cc834d05ee800333c515d21d8c6a7699a0e92987edc6fbc76119f384,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721933451333722580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54de6bb68dc2494531feca4bfce0b
0d0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3,PodSandboxId:617f7785e5fb01ee554971e43aa513134fe6c09c68d7e9a0fa5294ad2da72c58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721933451324991803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5942a85da7f3ab1db659e97700580b8b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc,PodSandboxId:a6178b5d395817a38e2b58c56463600fbb669ad157198d5a5446c7562545a27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721933451272611738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c4bf6b35e683c9ab68c44d7fba2957,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43c52ab7-7c63-4786-9b82-7356a81b6910 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.802909275Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aad8ad55-16fb-4cfe-b028-9b6e88ef8090 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.803168366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aad8ad55-16fb-4cfe-b028-9b6e88ef8090 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.804614239Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b087d492-d258-4f05-adcc-8eea5cae4570 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.805299066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934575805264066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b087d492-d258-4f05-adcc-8eea5cae4570 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.806194583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55e45618-edab-4a22-b461-0e9342819e18 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.806284191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55e45618-edab-4a22-b461-0e9342819e18 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.806740554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933485879283164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6b4b69fb05b208b705ffd093d0a15286a06e3ca32c24e4e66e19af235036eb,PodSandboxId:47feb5223f7c64f775b7350b826ee6cfd7b40d3555cf6d1a45f0a1cc2be70b99,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933465581590291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a19fc6a-6194-4c15-8414-a7c7da162bce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956,PodSandboxId:dd569d9d1412e29902cf9547fdcff22180a9dfd1b9987100901b7ef0e51f5ca9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933462768123994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-lq97z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035503b5-7acf-4f42-a057-b3346c9b9704,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933455065603157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
cd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9,PodSandboxId:eee515d54dbb47c3808abcbfb394f5976eafeb9d9fcd9f55b178c9380310b3fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721933455046807784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bf9rt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cbe378-8c6b-4034-9882-fc55c4eeca
38,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c,PodSandboxId:b553599d71041952311b42a6265e64ad21c76f4d30501b1bff26a61ffa1d1571,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721933451373806223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14ebf141570a844b702c633bb7812c81,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89,PodSandboxId:bbc06dc8cc834d05ee800333c515d21d8c6a7699a0e92987edc6fbc76119f384,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721933451333722580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54de6bb68dc2494531feca4bfce0b
0d0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3,PodSandboxId:617f7785e5fb01ee554971e43aa513134fe6c09c68d7e9a0fa5294ad2da72c58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721933451324991803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5942a85da7f3ab1db659e97700580b8b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc,PodSandboxId:a6178b5d395817a38e2b58c56463600fbb669ad157198d5a5446c7562545a27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721933451272611738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c4bf6b35e683c9ab68c44d7fba2957,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55e45618-edab-4a22-b461-0e9342819e18 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.846774145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34383b42-fe95-4179-ace5-9c6a79e2922c name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.846886076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34383b42-fe95-4179-ace5-9c6a79e2922c name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.850240773Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6bf90fa-f158-405d-9739-f6e60001553c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.850617968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934575850594896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6bf90fa-f158-405d-9739-f6e60001553c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.851252157Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1630dee8-e602-47b6-bce1-9f1742700b57 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.851443852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1630dee8-e602-47b6-bce1-9f1742700b57 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.851855369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933485879283164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6b4b69fb05b208b705ffd093d0a15286a06e3ca32c24e4e66e19af235036eb,PodSandboxId:47feb5223f7c64f775b7350b826ee6cfd7b40d3555cf6d1a45f0a1cc2be70b99,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933465581590291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a19fc6a-6194-4c15-8414-a7c7da162bce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956,PodSandboxId:dd569d9d1412e29902cf9547fdcff22180a9dfd1b9987100901b7ef0e51f5ca9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933462768123994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-lq97z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035503b5-7acf-4f42-a057-b3346c9b9704,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933455065603157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
cd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9,PodSandboxId:eee515d54dbb47c3808abcbfb394f5976eafeb9d9fcd9f55b178c9380310b3fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721933455046807784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bf9rt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cbe378-8c6b-4034-9882-fc55c4eeca
38,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c,PodSandboxId:b553599d71041952311b42a6265e64ad21c76f4d30501b1bff26a61ffa1d1571,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721933451373806223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14ebf141570a844b702c633bb7812c81,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89,PodSandboxId:bbc06dc8cc834d05ee800333c515d21d8c6a7699a0e92987edc6fbc76119f384,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721933451333722580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54de6bb68dc2494531feca4bfce0b
0d0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3,PodSandboxId:617f7785e5fb01ee554971e43aa513134fe6c09c68d7e9a0fa5294ad2da72c58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721933451324991803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5942a85da7f3ab1db659e97700580b8b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc,PodSandboxId:a6178b5d395817a38e2b58c56463600fbb669ad157198d5a5446c7562545a27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721933451272611738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c4bf6b35e683c9ab68c44d7fba2957,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1630dee8-e602-47b6-bce1-9f1742700b57 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.886534639Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90314411-e22c-4c45-bfa3-2892e556bcfd name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.886646030Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90314411-e22c-4c45-bfa3-2892e556bcfd name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.887704178Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ef806b9-e549-44a9-a5cc-6fa84b9e6a9a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.888128640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934575888100978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ef806b9-e549-44a9-a5cc-6fa84b9e6a9a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.888751010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d74bd88c-60ab-40cf-883d-7459272fbb40 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.888805977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d74bd88c-60ab-40cf-883d-7459272fbb40 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:35 no-preload-371663 crio[728]: time="2024-07-25 19:09:35.889041306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721933485879283164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6b4b69fb05b208b705ffd093d0a15286a06e3ca32c24e4e66e19af235036eb,PodSandboxId:47feb5223f7c64f775b7350b826ee6cfd7b40d3555cf6d1a45f0a1cc2be70b99,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721933465581590291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a19fc6a-6194-4c15-8414-a7c7da162bce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956,PodSandboxId:dd569d9d1412e29902cf9547fdcff22180a9dfd1b9987100901b7ef0e51f5ca9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721933462768123994,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-lq97z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 035503b5-7acf-4f42-a057-b3346c9b9704,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c,PodSandboxId:c3cb40b3caaf36e9d0d955290d1ac2132eb00f33064acfadbca6c202c32fe866,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721933455065603157,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
cd1c25d-32bd-4190-9e85-9629d6ea8bd0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9,PodSandboxId:eee515d54dbb47c3808abcbfb394f5976eafeb9d9fcd9f55b178c9380310b3fb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721933455046807784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bf9rt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65cbe378-8c6b-4034-9882-fc55c4eeca
38,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c,PodSandboxId:b553599d71041952311b42a6265e64ad21c76f4d30501b1bff26a61ffa1d1571,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721933451373806223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14ebf141570a844b702c633bb7812c81,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89,PodSandboxId:bbc06dc8cc834d05ee800333c515d21d8c6a7699a0e92987edc6fbc76119f384,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721933451333722580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54de6bb68dc2494531feca4bfce0b
0d0,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3,PodSandboxId:617f7785e5fb01ee554971e43aa513134fe6c09c68d7e9a0fa5294ad2da72c58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721933451324991803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5942a85da7f3ab1db659e97700580b8b,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc,PodSandboxId:a6178b5d395817a38e2b58c56463600fbb669ad157198d5a5446c7562545a27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721933451272611738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-371663,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14c4bf6b35e683c9ab68c44d7fba2957,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d74bd88c-60ab-40cf-883d-7459272fbb40 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dcdeb74e65467       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   c3cb40b3caaf3       storage-provisioner
	ac6b4b69fb05b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   47feb5223f7c6       busybox
	143f91ca28541       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   1                   dd569d9d1412e       coredns-5cfdc65f69-lq97z
	e99e6f0bcc37c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       1                   c3cb40b3caaf3       storage-provisioner
	6b9d65c951729       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      18 minutes ago      Running             kube-proxy                1                   eee515d54dbb4       kube-proxy-bf9rt
	86a55c3ce8aca       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      18 minutes ago      Running             kube-apiserver            1                   b553599d71041       kube-apiserver-no-preload-371663
	f55693d23f976       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      18 minutes ago      Running             kube-controller-manager   1                   bbc06dc8cc834       kube-controller-manager-no-preload-371663
	e8502ebc3bc8f       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      18 minutes ago      Running             kube-scheduler            1                   617f7785e5fb0       kube-scheduler-no-preload-371663
	5b4489bee34a4       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      18 minutes ago      Running             etcd                      1                   a6178b5d39581       etcd-no-preload-371663
	
	
	==> coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58506 - 48972 "HINFO IN 3742996015109382260.5669205391674193530. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009742591s
	
	
	==> describe nodes <==
	Name:               no-preload-371663
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-371663
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=no-preload-371663
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_41_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:40:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-371663
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 19:09:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 19:06:42 +0000   Thu, 25 Jul 2024 18:40:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 19:06:42 +0000   Thu, 25 Jul 2024 18:40:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 19:06:42 +0000   Thu, 25 Jul 2024 18:40:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 19:06:42 +0000   Thu, 25 Jul 2024 18:51:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.62
	  Hostname:    no-preload-371663
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f820fa13c6425fac15cb7471f7543e
	  System UUID:                78f820fa-13c6-425f-ac15-cb7471f7543e
	  Boot ID:                    cfafaa54-5894-431e-8aa7-1cae14472e72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5cfdc65f69-lq97z                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-371663                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-371663             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-371663    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-bf9rt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-371663             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-78fcd8795b-zthnk              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-371663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-371663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-371663 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-371663 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-371663 event: Registered Node no-preload-371663 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-371663 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-371663 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-371663 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-371663 event: Registered Node no-preload-371663 in Controller
	
	
	==> dmesg <==
	[Jul25 18:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000002] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051273] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.933748] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.045778] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.554814] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.682576] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.056397] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077045] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.158331] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.158946] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.256781] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[ +14.688749] systemd-fstab-generator[1177]: Ignoring "noauto" option for root device
	[  +0.058043] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.835100] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +4.578737] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.424965] systemd-fstab-generator[1924]: Ignoring "noauto" option for root device
	[Jul25 18:51] kauditd_printk_skb: 61 callbacks suppressed
	[ +24.194477] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] <==
	{"level":"info","ts":"2024-07-25T18:50:51.751557Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.72.62:2380"}
	{"level":"info","ts":"2024-07-25T18:50:51.751875Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7318547c71bbcda3","initial-advertise-peer-urls":["https://192.168.72.62:2380"],"listen-peer-urls":["https://192.168.72.62:2380"],"advertise-client-urls":["https://192.168.72.62:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.62:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:50:51.752786Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:50:53.109326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-25T18:50:53.109365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-25T18:50:53.109391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 received MsgPreVoteResp from 7318547c71bbcda3 at term 2"}
	{"level":"info","ts":"2024-07-25T18:50:53.109402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 became candidate at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:53.109408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 received MsgVoteResp from 7318547c71bbcda3 at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:53.109417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7318547c71bbcda3 became leader at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:53.109423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7318547c71bbcda3 elected leader 7318547c71bbcda3 at term 3"}
	{"level":"info","ts":"2024-07-25T18:50:53.113752Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7318547c71bbcda3","local-member-attributes":"{Name:no-preload-371663 ClientURLs:[https://192.168.72.62:2379]}","request-path":"/0/members/7318547c71bbcda3/attributes","cluster-id":"3beaf59f728f470","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:50:53.113917Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:50:53.114164Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:50:53.114192Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:50:53.11433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:50:53.115174Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-25T18:50:53.115188Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-25T18:50:53.116032Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:50:53.1162Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.62:2379"}
	{"level":"info","ts":"2024-07-25T19:00:53.159642Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":852}
	{"level":"info","ts":"2024-07-25T19:00:53.169867Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":852,"took":"9.527413ms","hash":988434894,"current-db-size-bytes":2723840,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2723840,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-07-25T19:00:53.170028Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":988434894,"revision":852,"compact-revision":-1}
	{"level":"info","ts":"2024-07-25T19:05:53.166456Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1094}
	{"level":"info","ts":"2024-07-25T19:05:53.170319Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1094,"took":"3.23096ms","hash":1003881247,"current-db-size-bytes":2723840,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1634304,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-25T19:05:53.170465Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1003881247,"revision":1094,"compact-revision":852}
	
	
	==> kernel <==
	 19:09:36 up 19 min,  0 users,  load average: 0.26, 0.10, 0.09
	Linux no-preload-371663 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] <==
	W0725 19:05:55.312248       1 handler_proxy.go:99] no RequestInfo found in the context
	E0725 19:05:55.312300       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0725 19:05:55.313455       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 19:05:55.313516       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:06:55.313782       1 handler_proxy.go:99] no RequestInfo found in the context
	E0725 19:06:55.313851       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0725 19:06:55.314007       1 handler_proxy.go:99] no RequestInfo found in the context
	E0725 19:06:55.314109       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0725 19:06:55.315011       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 19:06:55.316232       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0725 19:08:55.315871       1 handler_proxy.go:99] no RequestInfo found in the context
	E0725 19:08:55.316021       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0725 19:08:55.317222       1 handler_proxy.go:99] no RequestInfo found in the context
	E0725 19:08:55.317319       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0725 19:08:55.317367       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 19:08:55.318479       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] <==
	E0725 19:04:28.745736       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:04:28.830339       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:04:58.752517       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:04:58.837704       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:05:28.759695       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:05:28.845722       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:05:58.766184       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:05:58.853821       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:06:28.772538       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:06:28.862811       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0725 19:06:42.295279       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-371663"
	E0725 19:06:58.778731       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:06:58.870907       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0725 19:07:07.718273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="185.324µs"
	I0725 19:07:21.717793       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="172.221µs"
	E0725 19:07:28.784862       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:07:28.884972       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:07:58.790872       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:07:58.892430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:08:28.797074       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:08:28.900870       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:08:58.805334       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:08:58.910423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0725 19:09:28.811968       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0725 19:09:28.918763       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0725 18:50:55.236060       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0725 18:50:55.246714       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.62"]
	E0725 18:50:55.246796       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0725 18:50:55.308982       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0725 18:50:55.309071       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 18:50:55.309122       1 server_linux.go:170] "Using iptables Proxier"
	I0725 18:50:55.316532       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0725 18:50:55.316775       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0725 18:50:55.317041       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:50:55.329967       1 config.go:197] "Starting service config controller"
	I0725 18:50:55.330003       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:50:55.330037       1 config.go:104] "Starting endpoint slice config controller"
	I0725 18:50:55.330041       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:50:55.343812       1 config.go:326] "Starting node config controller"
	I0725 18:50:55.343838       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:50:55.430207       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:50:55.430363       1 shared_informer.go:320] Caches are synced for service config
	I0725 18:50:55.444023       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] <==
	I0725 18:50:52.465444       1 serving.go:386] Generated self-signed cert in-memory
	W0725 18:50:54.288443       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 18:50:54.288532       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:50:54.288571       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 18:50:54.288594       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 18:50:54.352726       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0725 18:50:54.355297       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:50:54.360416       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 18:50:54.361082       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0725 18:50:54.361018       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 18:50:54.365175       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 18:50:54.465498       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 25 19:06:55 no-preload-371663 kubelet[1301]: E0725 19:06:55.717278    1301 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 25 19:06:55 no-preload-371663 kubelet[1301]: E0725 19:06:55.717510    1301 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qqgs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-78fcd8795b-zthnk_kube-system(1cd7a284-6dd0-4052-966f-617028833a54): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jul 25 19:06:55 no-preload-371663 kubelet[1301]: E0725 19:06:55.719025    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:07:07 no-preload-371663 kubelet[1301]: E0725 19:07:07.704054    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:07:21 no-preload-371663 kubelet[1301]: E0725 19:07:21.704031    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:07:32 no-preload-371663 kubelet[1301]: E0725 19:07:32.704582    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:07:43 no-preload-371663 kubelet[1301]: E0725 19:07:43.703182    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:07:50 no-preload-371663 kubelet[1301]: E0725 19:07:50.717695    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:07:50 no-preload-371663 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:07:50 no-preload-371663 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:07:50 no-preload-371663 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:07:50 no-preload-371663 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:07:58 no-preload-371663 kubelet[1301]: E0725 19:07:58.703189    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:08:12 no-preload-371663 kubelet[1301]: E0725 19:08:12.705775    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:08:27 no-preload-371663 kubelet[1301]: E0725 19:08:27.703259    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:08:42 no-preload-371663 kubelet[1301]: E0725 19:08:42.706539    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:08:50 no-preload-371663 kubelet[1301]: E0725 19:08:50.718683    1301 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 25 19:08:50 no-preload-371663 kubelet[1301]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 25 19:08:50 no-preload-371663 kubelet[1301]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 25 19:08:50 no-preload-371663 kubelet[1301]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 25 19:08:50 no-preload-371663 kubelet[1301]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 25 19:08:56 no-preload-371663 kubelet[1301]: E0725 19:08:56.706366    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:09:09 no-preload-371663 kubelet[1301]: E0725 19:09:09.704094    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:09:20 no-preload-371663 kubelet[1301]: E0725 19:09:20.703806    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	Jul 25 19:09:33 no-preload-371663 kubelet[1301]: E0725 19:09:33.704497    1301 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-zthnk" podUID="1cd7a284-6dd0-4052-966f-617028833a54"
	
	
	==> storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] <==
	I0725 18:51:25.960864       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 18:51:25.970065       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 18:51:25.970248       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 18:51:43.368816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 18:51:43.370109       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-371663_5df74d8e-26b3-46c7-9d6e-571d4b0da898!
	I0725 18:51:43.370416       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b09119ed-dae1-444e-8fd0-359a6539513b", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-371663_5df74d8e-26b3-46c7-9d6e-571d4b0da898 became leader
	I0725 18:51:43.470517       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-371663_5df74d8e-26b3-46c7-9d6e-571d4b0da898!
	
	
	==> storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] <==
	I0725 18:50:55.161420       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0725 18:51:25.164472       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-371663 -n no-preload-371663
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-371663 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-zthnk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-371663 describe pod metrics-server-78fcd8795b-zthnk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-371663 describe pod metrics-server-78fcd8795b-zthnk: exit status 1 (80.942678ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-zthnk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-371663 describe pod metrics-server-78fcd8795b-zthnk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (310.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (112.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.29:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.29:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108542 -n old-k8s-version-108542
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 2 (220.838777ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-108542" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-108542 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-108542 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.637µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-108542 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
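For reference, a manual way to repeat this image check once the old-k8s-version-108542 apiserver is reachable again is a jsonpath query against the dashboard-metrics-scraper deployment (a sketch using the context, namespace, and deployment names from the commands above; it is not part of the test itself):

	kubectl --context old-k8s-version-108542 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

If the dashboard addon applied the override from "addons enable dashboard --images=MetricsScraper=registry.k8s.io/echoserver:1.4" (see the Audit table below), the output should include registry.k8s.io/echoserver:1.4.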
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 2 (213.751339ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-108542 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-108542 logs -n 25: (1.575771537s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC |                     |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-979261                              | cert-expiration-979261       | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:42 UTC |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:42 UTC | 25 Jul 24 18:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-819413             | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-819413                  | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-819413 --memory=2200 --alsologtostderr   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:43 UTC | 25 Jul 24 18:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-108542        | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| image   | newest-cni-819413 image list                           | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| delete  | -p newest-cni-819413                                   | newest-cni-819413            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:44 UTC |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-371663                  | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-371663 --memory=2200                     | no-preload-371663            | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:55 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-600433       | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-600433 | jenkins | v1.33.1 | 25 Jul 24 18:44 UTC | 25 Jul 24 18:54 UTC |
	|         | default-k8s-diff-port-600433                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-646344            | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-108542             | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC | 25 Jul 24 18:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-108542                              | old-k8s-version-108542       | jenkins | v1.33.1 | 25 Jul 24 18:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-646344                 | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-646344                                  | embed-certs-646344           | jenkins | v1.33.1 | 25 Jul 24 18:47 UTC | 25 Jul 24 18:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:47:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:47:51.335413   60732 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:47:51.335822   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.335880   60732 out.go:304] Setting ErrFile to fd 2...
	I0725 18:47:51.335900   60732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:51.336419   60732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:47:51.337339   60732 out.go:298] Setting JSON to false
	I0725 18:47:51.338209   60732 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5415,"bootTime":1721927856,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:47:51.338264   60732 start.go:139] virtualization: kvm guest
	I0725 18:47:51.340134   60732 out.go:177] * [embed-certs-646344] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:47:51.341750   60732 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:47:51.341752   60732 notify.go:220] Checking for updates...
	I0725 18:47:51.344351   60732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:47:51.345770   60732 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:47:51.346912   60732 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:47:51.348038   60732 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:47:51.349161   60732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:47:51.350578   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:47:51.350953   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.350991   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.365561   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0725 18:47:51.365978   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.366490   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.366509   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.366823   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.366999   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.367234   60732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:47:51.367497   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:47:51.367527   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:47:51.381639   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0725 18:47:51.381960   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:47:51.382381   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:47:51.382402   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:47:51.382685   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:47:51.382870   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:47:51.415199   60732 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 18:47:51.416470   60732 start.go:297] selected driver: kvm2
	I0725 18:47:51.416488   60732 start.go:901] validating driver "kvm2" against &{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.416607   60732 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:47:51.417317   60732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.417405   60732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 18:47:51.431942   60732 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 18:47:51.432284   60732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:47:51.432371   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:47:51.432386   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:47:51.432434   60732 start.go:340] cluster config:
	{Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:47:51.432535   60732 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:47:51.435012   60732 out.go:177] * Starting "embed-certs-646344" primary control-plane node in "embed-certs-646344" cluster
	I0725 18:47:53.472602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:47:51.436106   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:47:51.436136   60732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 18:47:51.436143   60732 cache.go:56] Caching tarball of preloaded images
	I0725 18:47:51.436215   60732 preload.go:172] Found /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0725 18:47:51.436238   60732 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0725 18:47:51.436365   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:47:51.436560   60732 start.go:360] acquireMachinesLock for embed-certs-646344: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:47:59.552616   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:02.624594   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:08.704607   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:11.776581   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:17.856602   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:20.928547   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:27.008590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:30.084604   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:36.160617   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:39.232633   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:45.312630   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:48.384662   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:54.464559   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:48:57.536621   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:03.616552   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:06.688590   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.773620   59645 start.go:364] duration metric: took 4m26.592394108s to acquireMachinesLock for "default-k8s-diff-port-600433"
	I0725 18:49:15.773683   59645 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:15.773694   59645 fix.go:54] fixHost starting: 
	I0725 18:49:15.774019   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:15.774051   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:15.789240   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0725 18:49:15.789740   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:15.790212   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:15.790233   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:15.790591   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:15.790845   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:15.791014   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:15.793113   59645 fix.go:112] recreateIfNeeded on default-k8s-diff-port-600433: state=Stopped err=<nil>
	I0725 18:49:15.793149   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	W0725 18:49:15.793313   59645 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:15.795191   59645 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-600433" ...
	I0725 18:49:12.768538   59378 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.62:22: connect: no route to host
	I0725 18:49:15.771150   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:15.771186   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771533   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:49:15.771558   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:49:15.771774   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:49:15.773458   59378 machine.go:97] duration metric: took 4m37.565633658s to provisionDockerMachine
	I0725 18:49:15.773505   59378 fix.go:56] duration metric: took 4m37.588536865s for fixHost
	I0725 18:49:15.773515   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 4m37.588577134s
	W0725 18:49:15.773539   59378 start.go:714] error starting host: provision: host is not running
	W0725 18:49:15.773622   59378 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0725 18:49:15.773634   59378 start.go:729] Will try again in 5 seconds ...
	I0725 18:49:15.796482   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Start
	I0725 18:49:15.796686   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring networks are active...
	I0725 18:49:15.797399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network default is active
	I0725 18:49:15.797752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Ensuring network mk-default-k8s-diff-port-600433 is active
	I0725 18:49:15.798080   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Getting domain xml...
	I0725 18:49:15.798673   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Creating domain...
	I0725 18:49:17.018432   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting to get IP...
	I0725 18:49:17.019400   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.019970   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.020072   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.019959   61066 retry.go:31] will retry after 308.610139ms: waiting for machine to come up
	I0725 18:49:17.330698   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331224   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.331257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.331162   61066 retry.go:31] will retry after 334.762083ms: waiting for machine to come up
	I0725 18:49:17.667824   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668211   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:17.668241   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:17.668158   61066 retry.go:31] will retry after 474.612313ms: waiting for machine to come up
	I0725 18:49:18.145035   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.145575   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.145498   61066 retry.go:31] will retry after 493.878098ms: waiting for machine to come up
	I0725 18:49:18.641257   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:18.641839   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:18.641705   61066 retry.go:31] will retry after 747.653142ms: waiting for machine to come up
	I0725 18:49:20.776022   59378 start.go:360] acquireMachinesLock for no-preload-371663: {Name:mk20c7a10d76951266581ba86debeac6b9496ec5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 18:49:19.390788   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391296   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:19.391327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:19.391237   61066 retry.go:31] will retry after 790.014184ms: waiting for machine to come up
	I0725 18:49:20.183244   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183733   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:20.183756   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:20.183676   61066 retry.go:31] will retry after 932.227483ms: waiting for machine to come up
	I0725 18:49:21.117548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.117989   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:21.118019   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:21.117947   61066 retry.go:31] will retry after 1.421954156s: waiting for machine to come up
	I0725 18:49:22.541650   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542032   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:22.542059   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:22.541972   61066 retry.go:31] will retry after 1.281624824s: waiting for machine to come up
	I0725 18:49:23.825380   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825721   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:23.825738   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:23.825700   61066 retry.go:31] will retry after 1.470467032s: waiting for machine to come up
	I0725 18:49:25.298488   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.298993   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:25.299016   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:25.298958   61066 retry.go:31] will retry after 2.857621922s: waiting for machine to come up
	I0725 18:49:28.157929   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158361   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:28.158387   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:28.158322   61066 retry.go:31] will retry after 2.354044303s: waiting for machine to come up
	I0725 18:49:30.514911   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515408   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | unable to find current IP address of domain default-k8s-diff-port-600433 in network mk-default-k8s-diff-port-600433
	I0725 18:49:30.515440   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | I0725 18:49:30.515361   61066 retry.go:31] will retry after 4.26590841s: waiting for machine to come up
	I0725 18:49:36.036943   60176 start.go:364] duration metric: took 3m49.551567331s to acquireMachinesLock for "old-k8s-version-108542"
	I0725 18:49:36.037007   60176 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:36.037018   60176 fix.go:54] fixHost starting: 
	I0725 18:49:36.037477   60176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:36.037517   60176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:36.055190   60176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0725 18:49:36.055631   60176 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:36.056086   60176 main.go:141] libmachine: Using API Version  1
	I0725 18:49:36.056105   60176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:36.056466   60176 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:36.056685   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:36.056862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetState
	I0725 18:49:36.058311   60176 fix.go:112] recreateIfNeeded on old-k8s-version-108542: state=Stopped err=<nil>
	I0725 18:49:36.058348   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	W0725 18:49:36.058530   60176 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:36.060822   60176 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-108542" ...
	I0725 18:49:36.062077   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .Start
	I0725 18:49:36.062241   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring networks are active...
	I0725 18:49:36.062926   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network default is active
	I0725 18:49:36.063329   60176 main.go:141] libmachine: (old-k8s-version-108542) Ensuring network mk-old-k8s-version-108542 is active
	I0725 18:49:36.063698   60176 main.go:141] libmachine: (old-k8s-version-108542) Getting domain xml...
	I0725 18:49:36.064367   60176 main.go:141] libmachine: (old-k8s-version-108542) Creating domain...
	I0725 18:49:34.786308   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786801   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Found IP for machine: 192.168.50.221
	I0725 18:49:34.786836   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has current primary IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.786848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserving static IP address...
	I0725 18:49:34.787187   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.787223   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | skip adding static IP to network mk-default-k8s-diff-port-600433 - found existing host DHCP lease matching {name: "default-k8s-diff-port-600433", mac: "52:54:00:ee:71:68", ip: "192.168.50.221"}
	I0725 18:49:34.787237   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Reserved static IP address: 192.168.50.221
	I0725 18:49:34.787251   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Getting to WaitForSSH function...
	I0725 18:49:34.787261   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Waiting for SSH to be available...
	I0725 18:49:34.789202   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789467   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.789494   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.789582   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH client type: external
	I0725 18:49:34.789608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa (-rw-------)
	I0725 18:49:34.789642   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:34.789656   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | About to run SSH command:
	I0725 18:49:34.789672   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | exit 0
	I0725 18:49:34.916303   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | SSH cmd err, output: <nil>: 
	I0725 18:49:34.916741   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetConfigRaw
	I0725 18:49:34.917309   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:34.919931   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920356   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.920388   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.920711   59645 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/config.json ...
	I0725 18:49:34.920952   59645 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:34.920973   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:34.921158   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:34.923280   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923663   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:34.923699   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:34.923782   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:34.923953   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924116   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:34.924367   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:34.924559   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:34.924778   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:34.924789   59645 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:35.036568   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:35.036605   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.036862   59645 buildroot.go:166] provisioning hostname "default-k8s-diff-port-600433"
	I0725 18:49:35.036890   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.037089   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.039523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.039891   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.039928   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.040048   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.040240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040409   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.040540   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.040696   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.040855   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.040867   59645 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-600433 && echo "default-k8s-diff-port-600433" | sudo tee /etc/hostname
	I0725 18:49:35.170553   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-600433
	
	I0725 18:49:35.170606   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.173260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173590   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.173615   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.173811   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.174057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174240   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.174402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.174606   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.174762   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.174798   59645 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-600433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-600433/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-600433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:35.292349   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:35.292387   59645 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:35.292425   59645 buildroot.go:174] setting up certificates
	I0725 18:49:35.292443   59645 provision.go:84] configureAuth start
	I0725 18:49:35.292456   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetMachineName
	I0725 18:49:35.292749   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:35.295317   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295628   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.295657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.295817   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.297815   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298114   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.298146   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.298330   59645 provision.go:143] copyHostCerts
	I0725 18:49:35.298373   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:35.298384   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:35.298461   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:35.298578   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:35.298590   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:35.298631   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:35.298725   59645 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:35.298735   59645 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:35.298767   59645 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:35.298846   59645 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-600433 san=[127.0.0.1 192.168.50.221 default-k8s-diff-port-600433 localhost minikube]
	I0725 18:49:35.385077   59645 provision.go:177] copyRemoteCerts
	I0725 18:49:35.385142   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:35.385168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.387858   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388165   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.388195   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.388399   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.388604   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.388760   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.388903   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.473920   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:35.496193   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0725 18:49:35.517673   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:35.538593   59645 provision.go:87] duration metric: took 246.139455ms to configureAuth
	I0725 18:49:35.538617   59645 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:35.538796   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:35.538860   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.541598   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542144   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.542168   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.542369   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.542548   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542664   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.542812   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.542937   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.543138   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.543167   59645 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:35.799471   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:35.799495   59645 machine.go:97] duration metric: took 878.530074ms to provisionDockerMachine
	I0725 18:49:35.799509   59645 start.go:293] postStartSetup for "default-k8s-diff-port-600433" (driver="kvm2")
	I0725 18:49:35.799526   59645 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:35.799569   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:35.799861   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:35.799916   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.802372   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.802776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.802882   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.803057   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.803200   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.803304   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:35.886188   59645 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:35.890053   59645 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:35.890090   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:35.890157   59645 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:35.890227   59645 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:35.890317   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:35.899121   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:35.921904   59645 start.go:296] duration metric: took 122.381588ms for postStartSetup
	I0725 18:49:35.921942   59645 fix.go:56] duration metric: took 20.148249245s for fixHost
	I0725 18:49:35.921960   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:35.924865   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925265   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:35.925300   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:35.925414   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:35.925608   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925761   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:35.925876   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:35.926011   59645 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:35.926191   59645 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.221 22 <nil> <nil>}
	I0725 18:49:35.926205   59645 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:49:36.036748   59645 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933376.013042854
	
	I0725 18:49:36.036779   59645 fix.go:216] guest clock: 1721933376.013042854
	I0725 18:49:36.036790   59645 fix.go:229] Guest: 2024-07-25 18:49:36.013042854 +0000 UTC Remote: 2024-07-25 18:49:35.921945116 +0000 UTC m=+286.890099623 (delta=91.097738ms)
	I0725 18:49:36.036855   59645 fix.go:200] guest clock delta is within tolerance: 91.097738ms
	I0725 18:49:36.036863   59645 start.go:83] releasing machines lock for "default-k8s-diff-port-600433", held for 20.263198657s
	I0725 18:49:36.036905   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.037178   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:36.040216   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040692   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.040717   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.040881   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041327   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041501   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:36.041596   59645 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:36.041657   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.041693   59645 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:36.041718   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:36.044433   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044752   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.044775   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.044799   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045030   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045191   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:36.045209   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045217   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:36.045375   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045476   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:36.045501   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.045648   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:36.045828   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:36.045988   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:36.158410   59645 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:36.164254   59645 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:36.305911   59645 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:36.312544   59645 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:36.312642   59645 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:36.327394   59645 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:36.327420   59645 start.go:495] detecting cgroup driver to use...
	I0725 18:49:36.327497   59645 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:36.342695   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:36.355528   59645 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:36.355593   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:36.369191   59645 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:36.382786   59645 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:36.498465   59645 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:36.635188   59645 docker.go:233] disabling docker service ...
	I0725 18:49:36.635272   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:36.655356   59645 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:36.671402   59645 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:36.819969   59645 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:36.961130   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:36.976459   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:36.995542   59645 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:49:36.995607   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.006967   59645 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:37.007041   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.017503   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.027807   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.037804   59645 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:37.047817   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.057895   59645 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.075586   59645 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:37.085987   59645 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:37.095527   59645 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:37.095593   59645 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:37.107540   59645 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:37.117409   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:37.246455   59645 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:49:37.383873   59645 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:37.383959   59645 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:37.388630   59645 start.go:563] Will wait 60s for crictl version
	I0725 18:49:37.388687   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:49:37.393190   59645 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:37.439603   59645 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:37.439688   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.468723   59645 ssh_runner.go:195] Run: crio --version
	I0725 18:49:37.501339   59645 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0725 18:49:37.502895   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetIP
	I0725 18:49:37.505728   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506098   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:37.506128   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:37.506341   59645 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:37.510432   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:37.523446   59645 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:37.523608   59645 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:49:37.523691   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:37.561149   59645 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:49:37.561209   59645 ssh_runner.go:195] Run: which lz4
	I0725 18:49:37.565614   59645 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:49:37.569702   59645 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:37.569738   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:49:38.884355   59645 crio.go:462] duration metric: took 1.318757754s to copy over tarball
	I0725 18:49:38.884481   59645 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:49:37.310225   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting to get IP...
	I0725 18:49:37.311059   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.311480   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.311557   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.311444   61209 retry.go:31] will retry after 249.654633ms: waiting for machine to come up
	I0725 18:49:37.563210   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.563727   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.563774   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.563696   61209 retry.go:31] will retry after 360.974896ms: waiting for machine to come up
	I0725 18:49:37.926464   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:37.927033   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:37.927104   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:37.926935   61209 retry.go:31] will retry after 392.213819ms: waiting for machine to come up
	I0725 18:49:38.320659   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.321153   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.321182   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.321107   61209 retry.go:31] will retry after 443.035852ms: waiting for machine to come up
	I0725 18:49:38.765569   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:38.765972   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:38.765996   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:38.765944   61209 retry.go:31] will retry after 691.876502ms: waiting for machine to come up
	I0725 18:49:39.459944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:39.460308   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:39.460354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:39.460259   61209 retry.go:31] will retry after 870.093433ms: waiting for machine to come up
	I0725 18:49:40.331944   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:40.332382   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:40.332411   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:40.332301   61209 retry.go:31] will retry after 875.3931ms: waiting for machine to come up
	I0725 18:49:41.209789   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:41.210251   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:41.210275   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:41.210196   61209 retry.go:31] will retry after 1.355093494s: waiting for machine to come up
	I0725 18:49:41.126101   59645 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241583376s)
	I0725 18:49:41.126141   59645 crio.go:469] duration metric: took 2.24174402s to extract the tarball
	I0725 18:49:41.126152   59645 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:49:41.163655   59645 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:41.204248   59645 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:49:41.204270   59645 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:49:41.204278   59645 kubeadm.go:934] updating node { 192.168.50.221 8444 v1.30.3 crio true true} ...
	I0725 18:49:41.204442   59645 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-600433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:49:41.204506   59645 ssh_runner.go:195] Run: crio config
	I0725 18:49:41.248210   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:41.248239   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:41.248255   59645 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:49:41.248286   59645 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.221 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-600433 NodeName:default-k8s-diff-port-600433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:49:41.248491   59645 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.221
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-600433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:49:41.248591   59645 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:49:41.257987   59645 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:49:41.258057   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:49:41.267141   59645 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0725 18:49:41.283078   59645 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:49:41.299009   59645 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
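	[editor's note] The kubeadm config printed above is rendered in memory and staged on the node as kubeadm.yaml.new (the scp just logged), then later diffed against the live file. A minimal sketch of that render-and-stage step follows, assuming a text/template over a few of the KubernetesConfig values shown in the log; the template and field names are illustrative, not minikube's bootstrapper code.

```go
package main

import (
	"os"
	"text/template"
)

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.Port}}
kubernetesVersion: {{.Version}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterCfg))
	// Staged copy; the running cluster's file is only replaced after a diff.
	f, err := os.Create("kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	err = t.Execute(f, map[string]string{
		"ControlPlaneEndpoint": "control-plane.minikube.internal",
		"Port":                 "8444",
		"Version":              "v1.30.3",
		"DNSDomain":            "cluster.local",
		"PodSubnet":            "10.244.0.0/16",
		"ServiceSubnet":        "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}
```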
	I0725 18:49:41.315642   59645 ssh_runner.go:195] Run: grep 192.168.50.221	control-plane.minikube.internal$ /etc/hosts
	I0725 18:49:41.319267   59645 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:41.330435   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:41.453042   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:49:41.471864   59645 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433 for IP: 192.168.50.221
	I0725 18:49:41.471896   59645 certs.go:194] generating shared ca certs ...
	I0725 18:49:41.471915   59645 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:41.472098   59645 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:49:41.472151   59645 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:49:41.472163   59645 certs.go:256] generating profile certs ...
	I0725 18:49:41.472271   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.key
	I0725 18:49:41.472399   59645 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key.28cfcfe9
	I0725 18:49:41.472470   59645 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key
	I0725 18:49:41.472630   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:49:41.472681   59645 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:49:41.472696   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:49:41.472734   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:49:41.472768   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:49:41.472801   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:49:41.472875   59645 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:41.473783   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:49:41.519536   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:49:41.570915   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:49:41.596050   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:49:41.622290   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0725 18:49:41.644771   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:49:41.673056   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:49:41.698215   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:49:41.720502   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:49:41.742897   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:49:41.765403   59645 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:49:41.788097   59645 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:49:41.804016   59645 ssh_runner.go:195] Run: openssl version
	I0725 18:49:41.809451   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:49:41.819312   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823677   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.823731   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:49:41.829342   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:49:41.839245   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:49:41.848902   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852894   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.852948   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:49:41.858231   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:49:41.868414   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:49:41.878478   59645 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882534   59645 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.882596   59645 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:49:41.888100   59645 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:49:41.897994   59645 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:49:41.902066   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:49:41.907593   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:49:41.913339   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:49:41.918977   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:49:41.924846   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:49:41.931208   59645 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
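	[editor's note] The six openssl invocations above all ask the same question: does the certificate expire within the next 86400 seconds (24 h)? A minimal equivalent check with crypto/x509 is sketched below; the cert paths come from the log, the code itself is illustrative.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Println(c, "expires within 24h:", soon)
	}
}
```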
	I0725 18:49:41.936979   59645 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-600433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-600433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:49:41.937105   59645 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:49:41.937165   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:41.973862   59645 cri.go:89] found id: ""
	I0725 18:49:41.973954   59645 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:49:41.986980   59645 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:49:41.987006   59645 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:49:41.987059   59645 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:49:41.996155   59645 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:49:41.997176   59645 kubeconfig.go:125] found "default-k8s-diff-port-600433" server: "https://192.168.50.221:8444"
	I0725 18:49:41.999255   59645 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:49:42.007863   59645 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.221
	I0725 18:49:42.007898   59645 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:49:42.007910   59645 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:49:42.007950   59645 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:49:42.041234   59645 cri.go:89] found id: ""
	I0725 18:49:42.041344   59645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:49:42.057752   59645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:49:42.067347   59645 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:49:42.067367   59645 kubeadm.go:157] found existing configuration files:
	
	I0725 18:49:42.067414   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0725 18:49:42.075815   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:49:42.075862   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:49:42.084352   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0725 18:49:42.092738   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:49:42.092795   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:49:42.101917   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.110104   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:49:42.110171   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:49:42.118781   59645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0725 18:49:42.127369   59645 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:49:42.127417   59645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:49:42.136433   59645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
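	[editor's note] The grep/rm pairs above implement a simple rule: any kubeconfig under /etc/kubernetes that does not point at the expected control-plane endpoint is removed, so the `kubeadm init phase kubeconfig all` run that follows regenerates it; the staged kubeadm.yaml.new is then copied into place. A minimal sketch of that cleanup rule, with paths and endpoint taken from the log and everything else illustrative:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const want = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), want) {
			// Missing file or wrong endpoint: drop it and let kubeadm rewrite it.
			_ = os.Remove(f)
			fmt.Println("removed (stale or absent):", f)
		}
	}
}
```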
	I0725 18:49:42.145402   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.256466   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:42.967465   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.180271   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.238156   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:43.333954   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:49:43.334063   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:43.834381   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:42.566588   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:42.567061   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:42.567089   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:42.567010   61209 retry.go:31] will retry after 1.670701174s: waiting for machine to come up
	I0725 18:49:44.238961   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:44.239359   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:44.239377   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:44.239329   61209 retry.go:31] will retry after 2.028917586s: waiting for machine to come up
	I0725 18:49:46.270213   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:46.270674   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:46.270695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:46.270630   61209 retry.go:31] will retry after 2.760614678s: waiting for machine to come up
	I0725 18:49:44.335103   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:44.835115   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.334875   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.834915   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:49:45.849684   59645 api_server.go:72] duration metric: took 2.515729384s to wait for apiserver process to appear ...
	I0725 18:49:45.849717   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:49:45.849752   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.417830   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:49:48.417861   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:49:48.417898   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.496770   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.496823   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:48.850275   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:48.854417   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:48.854446   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.350652   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.356554   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:49:49.356585   59645 api_server.go:103] status: https://192.168.50.221:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:49:49.849872   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:49:49.855690   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:49:49.863742   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:49:49.863770   59645 api_server.go:131] duration metric: took 4.014045168s to wait for apiserver health ...
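	[editor's note] The health wait traced above polls https://192.168.50.221:8444/healthz roughly every 500 ms, treating 403 (the anonymous probe is forbidden) and 500 (post-start hooks not yet finished) as "not ready" until the endpoint finally returns 200 "ok". A minimal sketch of that loop follows; TLS verification is skipped only to keep the sketch self-contained, whereas a real client would trust the cluster CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.221:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 and 500 both mean "keep waiting" during startup.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy in time")
}
```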
	I0725 18:49:49.863780   59645 cni.go:84] Creating CNI manager for ""
	I0725 18:49:49.863788   59645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:49:49.865438   59645 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:49:49.034670   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:49.035109   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:49.035136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:49.035073   61209 retry.go:31] will retry after 2.928049351s: waiting for machine to come up
	I0725 18:49:49.866747   59645 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:49:49.877963   59645 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:49:49.898915   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:49:49.916996   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:49:49.917037   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:49:49.917049   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:49:49.917067   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:49:49.917080   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:49:49.917093   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:49:49.917105   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:49:49.917112   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:49:49.917120   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:49:49.917127   59645 system_pods.go:74] duration metric: took 18.191827ms to wait for pod list to return data ...
	I0725 18:49:49.917145   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:49:49.921009   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:49:49.921032   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:49:49.921046   59645 node_conditions.go:105] duration metric: took 3.893327ms to run NodePressure ...
	I0725 18:49:49.921064   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:49:50.188485   59645 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192676   59645 kubeadm.go:739] kubelet initialised
	I0725 18:49:50.192696   59645 kubeadm.go:740] duration metric: took 4.188813ms waiting for restarted kubelet to initialise ...
	I0725 18:49:50.192710   59645 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:50.197736   59645 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.203856   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203881   59645 pod_ready.go:81] duration metric: took 6.126055ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.203891   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.203897   59645 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.209211   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209233   59645 pod_ready.go:81] duration metric: took 5.32855ms for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.209242   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.209248   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.216079   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216104   59645 pod_ready.go:81] duration metric: took 6.848427ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.216115   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.216122   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.301694   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301718   59645 pod_ready.go:81] duration metric: took 85.5884ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.301728   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.301735   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:50.702363   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702392   59645 pod_ready.go:81] duration metric: took 400.649914ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:50.702400   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-proxy-smhmv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:50.702406   59645 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.102906   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102943   59645 pod_ready.go:81] duration metric: took 400.527709ms for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.102955   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.102964   59645 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:51.502187   59645 pod_ready.go:97] node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502217   59645 pod_ready.go:81] duration metric: took 399.245254ms for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:49:51.502228   59645 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-600433" hosting pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.502235   59645 pod_ready.go:38] duration metric: took 1.309515361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
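	[editor's note] Each pod_ready.go wait above boils down to reading a pod and checking its Ready condition (all of them are skipped here because the node itself is not yet Ready). A minimal client-go sketch of that check follows; the kubeconfig path and pod name are taken from the log, and the loop is illustrative rather than minikube's helper.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19326-5877/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-mfjzs", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```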
	I0725 18:49:51.502249   59645 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:49:51.513796   59645 ops.go:34] apiserver oom_adj: -16
	I0725 18:49:51.513816   59645 kubeadm.go:597] duration metric: took 9.526804087s to restartPrimaryControlPlane
	I0725 18:49:51.513823   59645 kubeadm.go:394] duration metric: took 9.576855212s to StartCluster
	I0725 18:49:51.513842   59645 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.513969   59645 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:49:51.515531   59645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:49:51.515761   59645 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.221 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:49:51.515843   59645 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:49:51.515951   59645 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515975   59645 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.515983   59645 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.515995   59645 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:49:51.516017   59645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-600433"
	I0725 18:49:51.516024   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516025   59645 config.go:182] Loaded profile config "default-k8s-diff-port-600433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:49:51.516022   59645 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-600433"
	I0725 18:49:51.516103   59645 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.516123   59645 addons.go:243] addon metrics-server should already be in state true
	I0725 18:49:51.516202   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.516314   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516361   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516365   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516386   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.516636   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.516713   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.517682   59645 out.go:177] * Verifying Kubernetes components...
	I0725 18:49:51.519072   59645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:51.530909   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35239
	I0725 18:49:51.531207   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0725 18:49:51.531391   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531704   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.531952   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.531978   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532148   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.532169   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.532291   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.532474   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.532501   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.533028   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.533069   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.534984   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0725 18:49:51.535323   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.535729   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.535749   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.536027   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.536055   59645 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-600433"
	W0725 18:49:51.536077   59645 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:49:51.536103   59645 host.go:66] Checking if "default-k8s-diff-port-600433" exists ...
	I0725 18:49:51.536463   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536491   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.536518   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.536562   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.548458   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I0725 18:49:51.548987   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.549539   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.549563   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.549880   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.550016   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0725 18:49:51.550105   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.550366   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.550862   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.550897   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43569
	I0725 18:49:51.550975   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551220   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.551462   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.551708   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.551727   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.551748   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.551768   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.552170   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.552745   59645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:51.552787   59645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:51.553221   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.554936   59645 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:49:51.556152   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:49:51.556166   59645 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:49:51.556184   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.556202   59645 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:49:51.557826   59645 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.557870   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:49:51.557892   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.558763   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559109   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.559126   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.559255   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.559402   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.559522   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.559637   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.560776   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561142   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.561169   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.561285   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.561462   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.561624   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.561769   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.572412   59645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I0725 18:49:51.572773   59645 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:51.573256   59645 main.go:141] libmachine: Using API Version  1
	I0725 18:49:51.573269   59645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:51.573596   59645 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:51.573793   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetState
	I0725 18:49:51.575260   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .DriverName
	I0725 18:49:51.575503   59645 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.575513   59645 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:49:51.575523   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHHostname
	I0725 18:49:51.577887   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578208   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:68", ip: ""} in network mk-default-k8s-diff-port-600433: {Iface:virbr2 ExpiryTime:2024-07-25 19:41:20 +0000 UTC Type:0 Mac:52:54:00:ee:71:68 Iaid: IPaddr:192.168.50.221 Prefix:24 Hostname:default-k8s-diff-port-600433 Clientid:01:52:54:00:ee:71:68}
	I0725 18:49:51.578228   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | domain default-k8s-diff-port-600433 has defined IP address 192.168.50.221 and MAC address 52:54:00:ee:71:68 in network mk-default-k8s-diff-port-600433
	I0725 18:49:51.578339   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHPort
	I0725 18:49:51.578496   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHKeyPath
	I0725 18:49:51.578651   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .GetSSHUsername
	I0725 18:49:51.578775   59645 sshutil.go:53] new ssh client: &{IP:192.168.50.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/default-k8s-diff-port-600433/id_rsa Username:docker}
	I0725 18:49:51.710511   59645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:49:51.728187   59645 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:51.810767   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:49:51.810801   59645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:49:51.822774   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:49:51.828890   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:49:51.841308   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:49:51.841332   59645 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:49:51.864965   59645 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:49:51.864991   59645 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:49:51.910359   59645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:49:52.699480   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699512   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699488   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699573   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699812   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699829   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699839   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699848   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.699893   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.699926   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.699940   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.699956   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.699968   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.700056   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700086   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700202   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.700218   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.700248   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.704859   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.704873   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.705126   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.705144   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.794977   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795000   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795318   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795339   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795341   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) DBG | Closing plugin on server side
	I0725 18:49:52.795346   59645 main.go:141] libmachine: Making call to close driver server
	I0725 18:49:52.795360   59645 main.go:141] libmachine: (default-k8s-diff-port-600433) Calling .Close
	I0725 18:49:52.795632   59645 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:49:52.795657   59645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:49:52.795668   59645 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-600433"
	I0725 18:49:52.797643   59645 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:49:52.798886   59645 addons.go:510] duration metric: took 1.283046902s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0725 18:49:53.731631   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:51.964707   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:51.965228   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | unable to find current IP address of domain old-k8s-version-108542 in network mk-old-k8s-version-108542
	I0725 18:49:51.965263   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | I0725 18:49:51.965151   61209 retry.go:31] will retry after 3.053047755s: waiting for machine to come up
	I0725 18:49:55.022350   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022815   60176 main.go:141] libmachine: (old-k8s-version-108542) Found IP for machine: 192.168.39.29
	I0725 18:49:55.022846   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has current primary IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.022858   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserving static IP address...
	I0725 18:49:55.023277   60176 main.go:141] libmachine: (old-k8s-version-108542) Reserved static IP address: 192.168.39.29
	I0725 18:49:55.023333   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.023342   60176 main.go:141] libmachine: (old-k8s-version-108542) Waiting for SSH to be available...
	I0725 18:49:55.023394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | skip adding static IP to network mk-old-k8s-version-108542 - found existing host DHCP lease matching {name: "old-k8s-version-108542", mac: "52:54:00:19:68:38", ip: "192.168.39.29"}
	I0725 18:49:55.023425   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Getting to WaitForSSH function...
	I0725 18:49:55.025250   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025544   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.025574   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.025668   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH client type: external
	I0725 18:49:55.025699   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa (-rw-------)
	I0725 18:49:55.025731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:49:55.025753   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | About to run SSH command:
	I0725 18:49:55.025770   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | exit 0
	I0725 18:49:55.152309   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | SSH cmd err, output: <nil>: 
	I0725 18:49:55.152720   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetConfigRaw
	I0725 18:49:55.153338   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.155460   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155731   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.155755   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.155969   60176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/config.json ...
	I0725 18:49:55.156128   60176 machine.go:94] provisionDockerMachine start ...
	I0725 18:49:55.156143   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:55.156307   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.158465   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.158795   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.158827   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.159012   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.159174   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159366   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.159512   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.159688   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.159902   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.159914   60176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:49:55.268422   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:49:55.268446   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268707   60176 buildroot.go:166] provisioning hostname "old-k8s-version-108542"
	I0725 18:49:55.268732   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.268931   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.271599   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.271913   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.271949   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.272120   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.272285   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272490   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.272657   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.272830   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.273003   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.273017   60176 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-108542 && echo "old-k8s-version-108542" | sudo tee /etc/hostname
	I0725 18:49:55.398261   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-108542
	
	I0725 18:49:55.398291   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.401090   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.401517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.401669   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.401870   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402026   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.402182   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.402380   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.402621   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.402648   60176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-108542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-108542/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-108542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:49:55.523079   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:49:55.523115   60176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:49:55.523147   60176 buildroot.go:174] setting up certificates
	I0725 18:49:55.523156   60176 provision.go:84] configureAuth start
	I0725 18:49:55.523165   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetMachineName
	I0725 18:49:55.523486   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:55.526235   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526644   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.526675   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.526875   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.529466   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.529836   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.529865   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.530004   60176 provision.go:143] copyHostCerts
	I0725 18:49:55.530058   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:49:55.530068   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:49:55.530113   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:49:55.530198   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:49:55.530205   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:49:55.530225   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:49:55.530386   60176 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:49:55.530401   60176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:49:55.530426   60176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:49:55.530494   60176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-108542 san=[127.0.0.1 192.168.39.29 localhost minikube old-k8s-version-108542]
	I0725 18:49:55.740503   60176 provision.go:177] copyRemoteCerts
	I0725 18:49:55.740561   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:49:55.740585   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.743257   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743582   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.743615   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.743798   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.743997   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.744160   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.744312   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:55.825771   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:49:55.847516   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0725 18:49:55.869368   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:49:55.893223   60176 provision.go:87] duration metric: took 370.054854ms to configureAuth
	I0725 18:49:55.893255   60176 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:49:55.893425   60176 config.go:182] Loaded profile config "old-k8s-version-108542": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:49:55.893500   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:55.896394   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896703   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:55.896758   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:55.896962   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:55.897161   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897431   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:55.897631   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:55.897855   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:55.898023   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:55.898036   60176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:49:56.181257   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:49:56.181300   60176 machine.go:97] duration metric: took 1.025159397s to provisionDockerMachine
	I0725 18:49:56.181315   60176 start.go:293] postStartSetup for "old-k8s-version-108542" (driver="kvm2")
	I0725 18:49:56.181334   60176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:49:56.181353   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.181666   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:49:56.181688   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.184354   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184695   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.184718   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.184851   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.185034   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.185185   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.185308   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.266683   60176 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:49:56.270387   60176 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:49:56.270407   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:49:56.270474   60176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:49:56.270559   60176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:49:56.270668   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:49:56.279276   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:49:56.302444   60176 start.go:296] duration metric: took 121.115308ms for postStartSetup
	I0725 18:49:56.302497   60176 fix.go:56] duration metric: took 20.26546429s for fixHost
	I0725 18:49:56.302517   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.305136   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305488   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.305517   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.305706   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.305922   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306074   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.306193   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.306317   60176 main.go:141] libmachine: Using SSH client type: native
	I0725 18:49:56.306502   60176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.29 22 <nil> <nil>}
	I0725 18:49:56.306514   60176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:49:56.412717   60732 start.go:364] duration metric: took 2m4.976127328s to acquireMachinesLock for "embed-certs-646344"
	I0725 18:49:56.412771   60732 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:49:56.412782   60732 fix.go:54] fixHost starting: 
	I0725 18:49:56.413158   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:49:56.413188   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:49:56.432299   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0725 18:49:56.432712   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:49:56.433231   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:49:56.433260   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:49:56.433647   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:49:56.433868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:49:56.434040   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:49:56.435582   60732 fix.go:112] recreateIfNeeded on embed-certs-646344: state=Stopped err=<nil>
	I0725 18:49:56.435617   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	W0725 18:49:56.435793   60732 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:49:56.437567   60732 out.go:177] * Restarting existing kvm2 VM for "embed-certs-646344" ...
	I0725 18:49:56.412575   60176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933396.389223979
	
	I0725 18:49:56.412602   60176 fix.go:216] guest clock: 1721933396.389223979
	I0725 18:49:56.412612   60176 fix.go:229] Guest: 2024-07-25 18:49:56.389223979 +0000 UTC Remote: 2024-07-25 18:49:56.302501019 +0000 UTC m=+249.953644815 (delta=86.72296ms)
	I0725 18:49:56.412634   60176 fix.go:200] guest clock delta is within tolerance: 86.72296ms
	I0725 18:49:56.412639   60176 start.go:83] releasing machines lock for "old-k8s-version-108542", held for 20.375658703s
	I0725 18:49:56.412668   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.412935   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:56.415814   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416191   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.416219   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.416398   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.416862   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417065   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .DriverName
	I0725 18:49:56.417160   60176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:49:56.417201   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.417309   60176 ssh_runner.go:195] Run: cat /version.json
	I0725 18:49:56.417329   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHHostname
	I0725 18:49:56.420122   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420371   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420526   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420550   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420682   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.420816   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:56.420846   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.420850   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:56.420984   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHPort
	I0725 18:49:56.421058   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421126   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHKeyPath
	I0725 18:49:56.421198   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.421272   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetSSHUsername
	I0725 18:49:56.421418   60176 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/old-k8s-version-108542/id_rsa Username:docker}
	I0725 18:49:56.529391   60176 ssh_runner.go:195] Run: systemctl --version
	I0725 18:49:56.535114   60176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:49:56.674979   60176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:49:56.681160   60176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:49:56.681260   60176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:49:56.696192   60176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:49:56.696215   60176 start.go:495] detecting cgroup driver to use...
	I0725 18:49:56.696309   60176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:49:56.713088   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:49:56.727033   60176 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:49:56.727095   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:49:56.742008   60176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:49:56.756146   60176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:49:56.884075   60176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:49:57.051613   60176 docker.go:233] disabling docker service ...
	I0725 18:49:57.051742   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:49:57.068011   60176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:49:57.082300   60176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:49:57.208673   60176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:49:57.372393   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:49:57.397281   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:49:57.418913   60176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0725 18:49:57.418978   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.429833   60176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:49:57.429909   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.440717   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.451076   60176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:49:57.465052   60176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:49:57.476592   60176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:49:57.487164   60176 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:49:57.487225   60176 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:49:57.501748   60176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:49:57.514743   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:49:57.658648   60176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:49:57.811455   60176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:49:57.811534   60176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:49:57.816193   60176 start.go:563] Will wait 60s for crictl version
	I0725 18:49:57.816267   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:49:57.819557   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:49:57.854511   60176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:49:57.854594   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.881542   60176 ssh_runner.go:195] Run: crio --version
	I0725 18:49:57.910664   60176 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0725 18:49:55.733934   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:58.232025   59645 node_ready.go:53] node "default-k8s-diff-port-600433" has status "Ready":"False"
	I0725 18:49:56.438776   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Start
	I0725 18:49:56.438950   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring networks are active...
	I0725 18:49:56.439813   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network default is active
	I0725 18:49:56.440144   60732 main.go:141] libmachine: (embed-certs-646344) Ensuring network mk-embed-certs-646344 is active
	I0725 18:49:56.440644   60732 main.go:141] libmachine: (embed-certs-646344) Getting domain xml...
	I0725 18:49:56.441344   60732 main.go:141] libmachine: (embed-certs-646344) Creating domain...
	I0725 18:49:57.747307   60732 main.go:141] libmachine: (embed-certs-646344) Waiting to get IP...
	I0725 18:49:57.748364   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.748801   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.748852   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.748752   61389 retry.go:31] will retry after 207.883752ms: waiting for machine to come up
	I0725 18:49:57.958328   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:57.958813   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:57.958837   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:57.958773   61389 retry.go:31] will retry after 256.983672ms: waiting for machine to come up
	I0725 18:49:58.217316   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.217798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.217858   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.217760   61389 retry.go:31] will retry after 427.650618ms: waiting for machine to come up
	I0725 18:49:58.647668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:58.648053   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:58.648088   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:58.648021   61389 retry.go:31] will retry after 585.454725ms: waiting for machine to come up
	I0725 18:49:59.235003   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.235582   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.235612   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.235535   61389 retry.go:31] will retry after 477.660763ms: waiting for machine to come up
	I0725 18:49:59.715182   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:49:59.715675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:49:59.715706   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:49:59.715628   61389 retry.go:31] will retry after 775.403931ms: waiting for machine to come up
	I0725 18:50:00.492798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:00.493211   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:00.493239   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:00.493160   61389 retry.go:31] will retry after 1.086502086s: waiting for machine to come up
	I0725 18:49:57.912004   60176 main.go:141] libmachine: (old-k8s-version-108542) Calling .GetIP
	I0725 18:49:57.914958   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915429   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:68:38", ip: ""} in network mk-old-k8s-version-108542: {Iface:virbr3 ExpiryTime:2024-07-25 19:39:50 +0000 UTC Type:0 Mac:52:54:00:19:68:38 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:old-k8s-version-108542 Clientid:01:52:54:00:19:68:38}
	I0725 18:49:57.915462   60176 main.go:141] libmachine: (old-k8s-version-108542) DBG | domain old-k8s-version-108542 has defined IP address 192.168.39.29 and MAC address 52:54:00:19:68:38 in network mk-old-k8s-version-108542
	I0725 18:49:57.915628   60176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0725 18:49:57.919685   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:49:57.932248   60176 kubeadm.go:883] updating cluster {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:49:57.932392   60176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 18:49:57.932440   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:49:57.982230   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:49:57.982305   60176 ssh_runner.go:195] Run: which lz4
	I0725 18:49:57.986657   60176 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 18:49:57.990932   60176 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:49:57.990956   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0725 18:49:59.415735   60176 crio.go:462] duration metric: took 1.429111358s to copy over tarball
	I0725 18:49:59.415800   60176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
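The two runs above copy the ~473 MB preloaded-images tarball onto the node and unpack it into /var with lz4. A minimal standalone sketch of that extraction step, assuming tar, lz4 and sudo are available on the host (illustration only, not minikube's own ssh_runner code):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Unpack a preloaded image tarball into /var, preserving extended
	// attributes (security.capability) and decompressing with lz4,
	// mirroring the command recorded in the log above.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Print("preloaded images extracted")
}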
	I0725 18:49:59.234882   59645 node_ready.go:49] node "default-k8s-diff-port-600433" has status "Ready":"True"
	I0725 18:49:59.234909   59645 node_ready.go:38] duration metric: took 7.506682834s for node "default-k8s-diff-port-600433" to be "Ready" ...
	I0725 18:49:59.234921   59645 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:49:59.243034   59645 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.249940   59645 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace has status "Ready":"True"
	I0725 18:49:59.250024   59645 pod_ready.go:81] duration metric: took 6.957177ms for pod "coredns-7db6d8ff4d-mfjzs" in "kube-system" namespace to be "Ready" ...
	I0725 18:49:59.250051   59645 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.258057   59645 pod_ready.go:102] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:01.757802   59645 pod_ready.go:92] pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.757828   59645 pod_ready.go:81] duration metric: took 2.50775832s for pod "etcd-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.757840   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762837   59645 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.762862   59645 pod_ready.go:81] duration metric: took 5.014715ms for pod "kube-apiserver-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.762874   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768001   59645 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.768027   59645 pod_ready.go:81] duration metric: took 5.144999ms for pod "kube-controller-manager-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.768039   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772551   59645 pod_ready.go:92] pod "kube-proxy-smhmv" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:01.772574   59645 pod_ready.go:81] duration metric: took 4.526528ms for pod "kube-proxy-smhmv" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.772585   59645 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:01.580990   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:01.581438   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:01.581464   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:01.581397   61389 retry.go:31] will retry after 1.452798696s: waiting for machine to come up
	I0725 18:50:03.036272   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:03.036730   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:03.036766   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:03.036682   61389 retry.go:31] will retry after 1.667137658s: waiting for machine to come up
	I0725 18:50:04.705567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:04.705992   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:04.706019   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:04.705958   61389 retry.go:31] will retry after 2.010863389s: waiting for machine to come up
	I0725 18:50:02.370917   60176 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.955090558s)
	I0725 18:50:02.370951   60176 crio.go:469] duration metric: took 2.955186203s to extract the tarball
	I0725 18:50:02.370960   60176 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:50:02.411686   60176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:02.448550   60176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0725 18:50:02.448575   60176 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:02.448653   60176 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0725 18:50:02.448657   60176 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.448722   60176 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.448739   60176 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.448661   60176 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.448675   60176 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450195   60176 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.450213   60176 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0725 18:50:02.450237   60176 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.450206   60176 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.450335   60176 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.450375   60176 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:02.450489   60176 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.711747   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.718711   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0725 18:50:02.721465   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.721473   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.728447   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.745432   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.745791   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.776147   60176 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0725 18:50:02.776200   60176 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.776245   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.857374   60176 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0725 18:50:02.857423   60176 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0725 18:50:02.857486   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.876850   60176 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0725 18:50:02.876897   60176 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.876922   60176 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0725 18:50:02.876963   60176 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.876974   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877024   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.877044   60176 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0725 18:50:02.877071   60176 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.877107   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.896960   60176 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0725 18:50:02.897008   60176 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.897011   60176 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0725 18:50:02.897042   60176 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:02.897053   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897061   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0725 18:50:02.897083   60176 ssh_runner.go:195] Run: which crictl
	I0725 18:50:02.897120   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0725 18:50:02.897148   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0725 18:50:02.897196   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0725 18:50:02.897248   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0725 18:50:02.992459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0725 18:50:02.992499   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0725 18:50:03.005360   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0725 18:50:03.005381   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0725 18:50:03.005435   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0725 18:50:03.005459   60176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0725 18:50:03.005503   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0725 18:50:03.042218   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0725 18:50:03.054960   60176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0725 18:50:03.279419   60176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:03.416646   60176 cache_images.go:92] duration metric: took 968.05409ms to LoadCachedImages
	W0725 18:50:03.416750   60176 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0725 18:50:03.416767   60176 kubeadm.go:934] updating node { 192.168.39.29 8443 v1.20.0 crio true true} ...
	I0725 18:50:03.416896   60176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-108542 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:03.416979   60176 ssh_runner.go:195] Run: crio config
	I0725 18:50:03.470581   60176 cni.go:84] Creating CNI manager for ""
	I0725 18:50:03.470611   60176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:03.470627   60176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:03.470647   60176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.29 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-108542 NodeName:old-k8s-version-108542 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0725 18:50:03.470772   60176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-108542"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:03.470828   60176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0725 18:50:03.481757   60176 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:03.481839   60176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:03.494342   60176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0725 18:50:03.511779   60176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:03.532137   60176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0725 18:50:03.551049   60176 ssh_runner.go:195] Run: grep 192.168.39.29	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:03.554903   60176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:03.566677   60176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:03.687540   60176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:03.710900   60176 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542 for IP: 192.168.39.29
	I0725 18:50:03.710922   60176 certs.go:194] generating shared ca certs ...
	I0725 18:50:03.710937   60176 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:03.711088   60176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:03.711126   60176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:03.711132   60176 certs.go:256] generating profile certs ...
	I0725 18:50:03.711231   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.key
	I0725 18:50:03.711282   60176 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key.da8b5ed0
	I0725 18:50:03.711315   60176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key
	I0725 18:50:03.711420   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:03.711449   60176 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:03.711458   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:03.711479   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:03.711499   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:03.711520   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:03.711562   60176 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:03.712203   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:03.762265   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:03.804226   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:03.840167   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:03.868353   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 18:50:03.893425   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:03.917266   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:03.946205   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:03.974128   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:04.001887   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:04.026495   60176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:04.049083   60176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:04.065407   60176 ssh_runner.go:195] Run: openssl version
	I0725 18:50:04.071064   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:04.082038   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086705   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.086760   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:04.092445   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:04.103129   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:04.113789   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118390   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.118467   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:04.123884   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:04.134230   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:04.144372   60176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148559   60176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.148620   60176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:04.153744   60176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:04.163757   60176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:04.167873   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:04.173706   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:04.179385   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:04.185222   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:04.190716   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:04.196938   60176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
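Each openssl x509 -noout -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit marks it as expiring. A rough equivalent in Go, assuming the certificate is a PEM file at the path shown in the log (sketch only, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// exit non-zero if the certificate expires within the next 24 hours.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // path taken from the log above
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}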
	I0725 18:50:04.202361   60176 kubeadm.go:392] StartCluster: {Name:old-k8s-version-108542 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-108542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.29 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:04.202447   60176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:04.202505   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.243628   60176 cri.go:89] found id: ""
	I0725 18:50:04.243703   60176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:04.253768   60176 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:04.253788   60176 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:04.253841   60176 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:04.264596   60176 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:04.265990   60176 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-108542" does not appear in /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:04.266997   60176 kubeconfig.go:62] /home/jenkins/minikube-integration/19326-5877/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-108542" cluster setting kubeconfig missing "old-k8s-version-108542" context setting]
	I0725 18:50:04.268480   60176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:04.388386   60176 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:04.398469   60176 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.29
	I0725 18:50:04.398517   60176 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:04.398530   60176 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:04.398590   60176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:04.434823   60176 cri.go:89] found id: ""
	I0725 18:50:04.434906   60176 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:04.453378   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:04.463520   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:04.463559   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:04.463611   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:04.473075   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:04.473138   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:04.482881   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:04.494801   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:04.494875   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:04.507011   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.516433   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:04.516505   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:04.528076   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:04.537505   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:04.537572   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:04.547429   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:04.556717   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:04.754947   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.606839   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.850150   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:05.957944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:06.039317   60176 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:06.039436   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:04.245768   59645 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:05.780345   59645 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:05.780380   59645 pod_ready.go:81] duration metric: took 4.007784646s for pod "kube-scheduler-default-k8s-diff-port-600433" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:05.780395   59645 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:07.787259   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:06.718406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:06.718961   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:06.718995   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:06.718902   61389 retry.go:31] will retry after 2.686345537s: waiting for machine to come up
	I0725 18:50:09.406854   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:09.407346   60732 main.go:141] libmachine: (embed-certs-646344) DBG | unable to find current IP address of domain embed-certs-646344 in network mk-embed-certs-646344
	I0725 18:50:09.407388   60732 main.go:141] libmachine: (embed-certs-646344) DBG | I0725 18:50:09.407313   61389 retry.go:31] will retry after 3.432781605s: waiting for machine to come up
	I0725 18:50:06.539802   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:07.539809   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:08.539594   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.040315   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:09.539830   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.039578   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:10.539828   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:11.039598   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
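The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a half-second poll waiting for the restarted API server process to appear after the kubeadm init phases. A rough standalone sketch of such a wait loop in Go, assuming pgrep is on the PATH (illustration only, not minikube's api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process whose full command line
// matches pattern exists, or the timeout elapses, mirroring the retry
// pattern visible in the log.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x: the whole command line must match the pattern,
		// -n: newest matching process, -f: match against full args.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exits 0 when a match is found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}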
	I0725 18:50:10.285959   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:12.287101   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:14.181127   59378 start.go:364] duration metric: took 53.405056746s to acquireMachinesLock for "no-preload-371663"
	I0725 18:50:14.181178   59378 start.go:96] Skipping create...Using existing machine configuration
	I0725 18:50:14.181187   59378 fix.go:54] fixHost starting: 
	I0725 18:50:14.181648   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:14.181689   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:14.198182   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I0725 18:50:14.198640   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:14.199151   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:14.199176   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:14.199619   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:14.199815   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:14.199945   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:14.201475   59378 fix.go:112] recreateIfNeeded on no-preload-371663: state=Stopped err=<nil>
	I0725 18:50:14.201496   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	W0725 18:50:14.201653   59378 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 18:50:14.203496   59378 out.go:177] * Restarting existing kvm2 VM for "no-preload-371663" ...
	I0725 18:50:12.841703   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842187   60732 main.go:141] libmachine: (embed-certs-646344) Found IP for machine: 192.168.61.133
	I0725 18:50:12.842222   60732 main.go:141] libmachine: (embed-certs-646344) Reserving static IP address...
	I0725 18:50:12.842234   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has current primary IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.842625   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.842650   60732 main.go:141] libmachine: (embed-certs-646344) DBG | skip adding static IP to network mk-embed-certs-646344 - found existing host DHCP lease matching {name: "embed-certs-646344", mac: "52:54:00:59:67:ef", ip: "192.168.61.133"}
	I0725 18:50:12.842660   60732 main.go:141] libmachine: (embed-certs-646344) Reserved static IP address: 192.168.61.133
	I0725 18:50:12.842671   60732 main.go:141] libmachine: (embed-certs-646344) Waiting for SSH to be available...
	I0725 18:50:12.842684   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Getting to WaitForSSH function...
	I0725 18:50:12.844916   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845214   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.845237   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.845372   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH client type: external
	I0725 18:50:12.845406   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa (-rw-------)
	I0725 18:50:12.845474   60732 main.go:141] libmachine: (embed-certs-646344) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:12.845498   60732 main.go:141] libmachine: (embed-certs-646344) DBG | About to run SSH command:
	I0725 18:50:12.845528   60732 main.go:141] libmachine: (embed-certs-646344) DBG | exit 0
	I0725 18:50:12.968383   60732 main.go:141] libmachine: (embed-certs-646344) DBG | SSH cmd err, output: <nil>: 
	I0725 18:50:12.968690   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetConfigRaw
	I0725 18:50:12.969249   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:12.971567   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972072   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.972102   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.972338   60732 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/config.json ...
	I0725 18:50:12.972526   60732 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:12.972544   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:12.972739   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:12.974938   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975308   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:12.975336   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:12.975462   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:12.975671   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.975831   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:12.976010   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:12.976184   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:12.976414   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:12.976428   60732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:13.076310   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:13.076369   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076609   60732 buildroot.go:166] provisioning hostname "embed-certs-646344"
	I0725 18:50:13.076637   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.076830   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.079542   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.079895   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.079923   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.080050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.080232   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080385   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.080530   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.080722   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.080917   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.080935   60732 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-646344 && echo "embed-certs-646344" | sudo tee /etc/hostname
	I0725 18:50:13.193782   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-646344
	
	I0725 18:50:13.193814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.196822   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197149   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.197192   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.197367   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.197581   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197772   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.197906   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.198079   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.198292   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.198315   60732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-646344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-646344/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-646344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:13.313070   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:50:13.313098   60732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:13.313146   60732 buildroot.go:174] setting up certificates
	I0725 18:50:13.313161   60732 provision.go:84] configureAuth start
	I0725 18:50:13.313176   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetMachineName
	I0725 18:50:13.313457   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:13.316245   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316666   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.316695   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.316814   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.319178   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319516   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.319540   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.319697   60732 provision.go:143] copyHostCerts
	I0725 18:50:13.319751   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:13.319763   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:13.319816   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:13.319900   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:13.319908   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:13.319929   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:13.319981   60732 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:13.319988   60732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:13.320004   60732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:13.320051   60732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.embed-certs-646344 san=[127.0.0.1 192.168.61.133 embed-certs-646344 localhost minikube]
	I0725 18:50:13.540822   60732 provision.go:177] copyRemoteCerts
	I0725 18:50:13.540881   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:13.540903   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.543520   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.543805   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.543855   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.544013   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.544227   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.544450   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.544649   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:13.629982   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:13.652453   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:13.674398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:50:13.698302   60732 provision.go:87] duration metric: took 385.127611ms to configureAuth
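provision.go:117 above generates a server certificate whose SAN list covers 127.0.0.1, the machine IP, the machine name, localhost and minikube, and the certificate and key are then copied to /etc/docker on the guest. A minimal sketch of producing a certificate with that kind of SAN list using Go's standard library (self-signed here for brevity, whereas the log's certificate is signed by the provisioner's CA; the organization string and expiry are taken from the log and profile, everything else is illustrative):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-646344"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list mirroring the log: IPs and DNS names the server may be reached by.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.133")},
			DNSNames:    []string{"embed-certs-646344", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}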
	I0725 18:50:13.698329   60732 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:13.698501   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:13.698574   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.701274   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701675   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.701702   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.701850   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.702049   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.702345   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.702510   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:13.702699   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:13.702720   60732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:13.954912   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:13.954942   60732 machine.go:97] duration metric: took 982.402505ms to provisionDockerMachine
	I0725 18:50:13.954953   60732 start.go:293] postStartSetup for "embed-certs-646344" (driver="kvm2")
	I0725 18:50:13.954963   60732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:13.954978   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:13.955269   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:13.955301   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:13.957946   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958309   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:13.958332   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:13.958459   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:13.958663   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:13.958805   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:13.959017   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.039361   60732 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:14.043389   60732 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:14.043416   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:14.043488   60732 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:14.043588   60732 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:14.043686   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:14.053277   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:14.075725   60732 start.go:296] duration metric: took 120.758673ms for postStartSetup
	I0725 18:50:14.075772   60732 fix.go:56] duration metric: took 17.662990552s for fixHost
	I0725 18:50:14.075795   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.078338   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078728   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.078782   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.078932   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.079187   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079393   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.079562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.079763   60732 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:14.080049   60732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.133 22 <nil> <nil>}
	I0725 18:50:14.080068   60732 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:50:14.180948   60732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933414.131955665
	
	I0725 18:50:14.180974   60732 fix.go:216] guest clock: 1721933414.131955665
	I0725 18:50:14.180983   60732 fix.go:229] Guest: 2024-07-25 18:50:14.131955665 +0000 UTC Remote: 2024-07-25 18:50:14.075776451 +0000 UTC m=+142.772748611 (delta=56.179214ms)
	I0725 18:50:14.181032   60732 fix.go:200] guest clock delta is within tolerance: 56.179214ms
	I0725 18:50:14.181038   60732 start.go:83] releasing machines lock for "embed-certs-646344", held for 17.768291807s
	I0725 18:50:14.181069   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.181338   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:14.183693   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184035   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.184065   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.184195   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184748   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.184936   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:14.185004   60732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:14.185050   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.185172   60732 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:14.185203   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:14.187720   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188004   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188071   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188095   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188216   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188367   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:14.188393   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:14.188397   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188555   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.188567   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:14.188738   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:14.188757   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.188868   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:14.189001   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:14.270424   60732 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:14.322480   60732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:14.468034   60732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:14.474022   60732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:14.474090   60732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:14.494765   60732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:50:14.494793   60732 start.go:495] detecting cgroup driver to use...
	I0725 18:50:14.494862   60732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:14.515047   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:14.531708   60732 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:14.531773   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:14.546508   60732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:14.560878   60732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:14.681034   60732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:14.830960   60732 docker.go:233] disabling docker service ...
	I0725 18:50:14.831032   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:14.853115   60732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:14.869852   60732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:14.995284   60732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:15.109759   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:15.123118   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:15.140723   60732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0725 18:50:15.140792   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.150912   60732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:15.150968   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.161603   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.173509   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.183857   60732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:15.195023   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.207216   60732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.223821   60732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:15.234472   60732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:15.243979   60732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:15.244032   60732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:15.256791   60732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:50:15.268608   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:15.396398   60732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0725 18:50:15.528593   60732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:15.528659   60732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:15.534218   60732 start.go:563] Will wait 60s for crictl version
	I0725 18:50:15.534288   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:50:15.537933   60732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:15.583719   60732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:15.583824   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.613123   60732 ssh_runner.go:195] Run: crio --version
	I0725 18:50:15.643327   60732 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
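After rewriting the CRI-O drop-in config and restarting the service, start.go:542 above waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s more for a working crictl. A minimal sketch of that kind of bounded wait for a socket path (standard library only; the path and 60s budget follow the log, the 500ms polling interval is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for path until it exists or the deadline passes,
	// roughly like minikube's "Will wait 60s for socket path" step.
	func waitForSocket(path string, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket is up")
	}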
	I0725 18:50:14.204765   59378 main.go:141] libmachine: (no-preload-371663) Calling .Start
	I0725 18:50:14.204935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring networks are active...
	I0725 18:50:14.205596   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network default is active
	I0725 18:50:14.205935   59378 main.go:141] libmachine: (no-preload-371663) Ensuring network mk-no-preload-371663 is active
	I0725 18:50:14.206473   59378 main.go:141] libmachine: (no-preload-371663) Getting domain xml...
	I0725 18:50:14.207048   59378 main.go:141] libmachine: (no-preload-371663) Creating domain...
	I0725 18:50:15.487909   59378 main.go:141] libmachine: (no-preload-371663) Waiting to get IP...
	I0725 18:50:15.488775   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.489188   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.489244   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.489164   61562 retry.go:31] will retry after 288.758246ms: waiting for machine to come up
	I0725 18:50:15.779810   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:15.780284   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:15.780346   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:15.780234   61562 retry.go:31] will retry after 255.724346ms: waiting for machine to come up
	I0725 18:50:15.644608   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetIP
	I0725 18:50:15.647958   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648356   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:15.648386   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:15.648602   60732 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:15.652342   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:15.664409   60732 kubeadm.go:883] updating cluster {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:15.664587   60732 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 18:50:15.664658   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:15.701646   60732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0725 18:50:15.701703   60732 ssh_runner.go:195] Run: which lz4
	I0725 18:50:15.705629   60732 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 18:50:15.709366   60732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 18:50:15.709398   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0725 18:50:11.540367   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.040178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:12.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.039929   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:13.540517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.040281   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.540287   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.039549   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:15.540265   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:16.039520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:14.828431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:17.287944   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:16.037762   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.038357   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.038391   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.038313   61562 retry.go:31] will retry after 486.960289ms: waiting for machine to come up
	I0725 18:50:16.527269   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.527868   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.527899   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.527826   61562 retry.go:31] will retry after 389.104399ms: waiting for machine to come up
	I0725 18:50:16.918319   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:16.918911   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:16.918945   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:16.918854   61562 retry.go:31] will retry after 690.549271ms: waiting for machine to come up
	I0725 18:50:17.610632   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:17.611242   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:17.611269   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:17.611199   61562 retry.go:31] will retry after 753.624655ms: waiting for machine to come up
	I0725 18:50:18.366551   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:18.367078   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:18.367119   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:18.367022   61562 retry.go:31] will retry after 1.115992813s: waiting for machine to come up
	I0725 18:50:19.484121   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:19.484611   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:19.484641   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:19.484556   61562 retry.go:31] will retry after 1.306583093s: waiting for machine to come up
	I0725 18:50:20.793118   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:20.793603   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:20.793630   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:20.793548   61562 retry.go:31] will retry after 1.175948199s: waiting for machine to come up
	I0725 18:50:17.015043   60732 crio.go:462] duration metric: took 1.309449954s to copy over tarball
	I0725 18:50:17.015143   60732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 18:50:19.256777   60732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.241585619s)
	I0725 18:50:19.256816   60732 crio.go:469] duration metric: took 2.241743782s to extract the tarball
	I0725 18:50:19.256825   60732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 18:50:19.293259   60732 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:19.346692   60732 crio.go:514] all images are preloaded for cri-o runtime.
	I0725 18:50:19.346714   60732 cache_images.go:84] Images are preloaded, skipping loading
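The preloaded image tarball copied earlier is unpacked on the guest with tar and lz4 (crio.go:462-469 above), after which crictl confirms all images for v1.30.3 are present. A standalone sketch of that extraction step, shelling out to the same tar invocation locally (paths are illustrative and sudo is omitted; this is not minikube's runner, just the equivalent command):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Mirror the log's extraction command:
		//   tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		cmd := exec.Command("tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4",
			"-C", "/tmp/preload-target",
			"-xf", "/tmp/preloaded.tar.lz4")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "extract failed: %v\n%s", err, out)
			os.Exit(1)
		}
		fmt.Println("preload extracted")
	}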
	I0725 18:50:19.346722   60732 kubeadm.go:934] updating node { 192.168.61.133 8443 v1.30.3 crio true true} ...
	I0725 18:50:19.346822   60732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-646344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:19.346884   60732 ssh_runner.go:195] Run: crio config
	I0725 18:50:19.391246   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:19.391272   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:19.391287   60732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:19.391320   60732 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.133 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-646344 NodeName:embed-certs-646344 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.133"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:19.391518   60732 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-646344"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.133
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.133"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
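The block above is the fully rendered kubeadm/kubelet/kube-proxy configuration that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines later. As a toy sketch of how such a config can be rendered from a handful of cluster parameters with text/template (the template fragment and struct below are illustrative only, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	// clusterParams is an illustrative subset of the values substituted into
	// the kubeadm config above.
	type clusterParams struct {
		AdvertiseAddress  string
		BindPort          int
		NodeName          string
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}

	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := clusterParams{
			AdvertiseAddress:  "192.168.61.133",
			BindPort:          8443,
			NodeName:          "embed-certs-646344",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.30.3",
		}
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}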
	I0725 18:50:19.391597   60732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:50:19.401672   60732 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:19.401743   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:19.410693   60732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0725 18:50:19.428155   60732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:50:19.443819   60732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0725 18:50:19.461139   60732 ssh_runner.go:195] Run: grep 192.168.61.133	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:19.465121   60732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.133	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:19.478939   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:19.593175   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:19.609679   60732 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344 for IP: 192.168.61.133
	I0725 18:50:19.609705   60732 certs.go:194] generating shared ca certs ...
	I0725 18:50:19.609726   60732 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:19.609918   60732 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:19.609976   60732 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:19.609989   60732 certs.go:256] generating profile certs ...
	I0725 18:50:19.610096   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/client.key
	I0725 18:50:19.610176   60732 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key.b1982a11
	I0725 18:50:19.610227   60732 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key
	I0725 18:50:19.610380   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:19.610424   60732 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:19.610436   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:19.610467   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:19.610490   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:19.610518   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:19.610575   60732 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:19.611227   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:19.647448   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:19.679186   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:19.703996   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:19.731396   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0725 18:50:19.759550   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 18:50:19.795812   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:19.818419   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/embed-certs-646344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 18:50:19.840831   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:19.862271   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:19.886159   60732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:19.910827   60732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:19.926056   60732 ssh_runner.go:195] Run: openssl version
	I0725 18:50:19.931721   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:19.942217   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946261   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.946324   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:19.951695   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:19.961642   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:19.971592   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975615   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.975671   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:19.980904   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:19.991023   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:20.001258   60732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005322   60732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.005398   60732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:20.010666   60732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:20.021300   60732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:20.025462   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:20.031181   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:20.037216   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:20.043670   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:20.051210   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:20.057316   60732 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
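The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate is still valid for at least another day before the existing files are reused. The same check expressed in Go with crypto/x509 (the certificate path in main is illustrative):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path is still valid for
	// at least d, mirroring `openssl x509 -checkend <seconds>`.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/tmp/apiserver.crt", 86400*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for another 24h:", ok)
	}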
	I0725 18:50:20.062598   60732 kubeadm.go:392] StartCluster: {Name:embed-certs-646344 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-646344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:20.062719   60732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:20.062793   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.098154   60732 cri.go:89] found id: ""
	I0725 18:50:20.098229   60732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:20.107991   60732 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:20.108017   60732 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:20.108066   60732 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:20.117394   60732 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:20.118456   60732 kubeconfig.go:125] found "embed-certs-646344" server: "https://192.168.61.133:8443"
	I0725 18:50:20.120660   60732 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:20.129546   60732 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.133
	I0725 18:50:20.129576   60732 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:20.129589   60732 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:20.129645   60732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:20.162792   60732 cri.go:89] found id: ""
	I0725 18:50:20.162883   60732 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:20.178972   60732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:20.187981   60732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:20.188005   60732 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:20.188060   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:20.197371   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:20.197429   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:20.206704   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:20.215394   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:20.215459   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:20.224116   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.232437   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:20.232495   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:20.241577   60732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:20.249916   60732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:20.249976   60732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
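kubeadm.go:163 above treats each kubeconfig that does not mention https://control-plane.minikube.internal:8443 as stale and removes it before the kubeconfig phase regenerates it; in this run the greps fail simply because the files do not exist yet. A rough sketch of that check-then-remove pass (standard library only; the file paths and endpoint string are copied from the log, the rest is illustrative):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, endpoint) {
				// Missing or pointing at the wrong endpoint: drop it so
				// `kubeadm init phase kubeconfig` can regenerate it.
				os.Remove(f)
				fmt.Println("removed stale config:", f)
			}
		}
	}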
	I0725 18:50:20.258838   60732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:20.267902   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:20.380000   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:16.539725   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:17.539756   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.040221   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:18.539666   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.040416   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.540379   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.040257   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:20.540153   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:21.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:19.787705   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:22.230346   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:21.971072   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:21.971517   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:21.971544   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:21.971471   61562 retry.go:31] will retry after 1.926890777s: waiting for machine to come up
	I0725 18:50:23.900824   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:23.901448   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:23.901479   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:23.901397   61562 retry.go:31] will retry after 1.777870483s: waiting for machine to come up
	I0725 18:50:25.681617   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:25.682161   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:25.682190   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:25.682122   61562 retry.go:31] will retry after 2.846649743s: waiting for machine to come up
	I0725 18:50:21.816404   60732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.436368273s)
	I0725 18:50:21.816441   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.014796   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.093533   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:22.201595   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:22.201692   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.702680   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.202769   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.701909   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.720378   60732 api_server.go:72] duration metric: took 1.518780528s to wait for apiserver process to appear ...
	I0725 18:50:23.720468   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:23.720503   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
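	The healthz polling that follows is a plain HTTPS GET against the apiserver's /healthz endpoint; the per-check breakdown seen in the 500 responses below is what the verbose form of that endpoint returns. A hand-run probe (IP and port taken from the log; the anonymous curl can return 403 exactly as logged, and the context name is assumed to match the profile once the kubeconfig has been rewritten) would look roughly like:
	  # anonymous probe, may be rejected with 403 for system:anonymous
	  curl -ks "https://192.168.61.133:8443/healthz?verbose"
	  # authenticated probe via kubectl once the kubeconfig points at this cluster
	  kubectl --context embed-certs-646344 get --raw "/healthz?verbose"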
	I0725 18:50:21.540165   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.039698   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:22.539544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.040164   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:23.539691   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.040229   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:24.540225   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.039517   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:25.540158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.040441   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:26.542598   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:26.542661   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:26.542677   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.653001   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.653044   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:26.721231   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:26.725819   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:26.725851   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.221435   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.226412   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.226452   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:27.720962   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:27.726521   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:27.726550   60732 api_server.go:103] status: https://192.168.61.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:28.221186   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:50:28.225358   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:50:28.232310   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:50:28.232348   60732 api_server.go:131] duration metric: took 4.511861085s to wait for apiserver health ...
	I0725 18:50:28.232359   60732 cni.go:84] Creating CNI manager for ""
	I0725 18:50:28.232368   60732 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:28.234169   60732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
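	The bridge CNI configuration mentioned here is written to /etc/cni/net.d/1-k8s.conflist (the 496-byte scp appears a few lines below). The exact file contents are not reproduced in the log; purely for illustration, a typical bridge + host-local conflist of the kind used for this setup has roughly this shape:
	  sudo cat /etc/cni/net.d/1-k8s.conflist   # contents will be similar to:
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "addIf": "true",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }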
	I0725 18:50:24.287433   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:26.287625   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.287755   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:28.235545   60732 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:28.246029   60732 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:50:28.265973   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:28.277752   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:28.277791   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:28.277801   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:28.277818   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:28.277830   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:28.277839   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:28.277851   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:28.277861   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:28.277868   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:28.277878   60732 system_pods.go:74] duration metric: took 11.88598ms to wait for pod list to return data ...
	I0725 18:50:28.277895   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:28.282289   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:28.282320   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:28.282335   60732 node_conditions.go:105] duration metric: took 4.431712ms to run NodePressure ...
	I0725 18:50:28.282354   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:28.551353   60732 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557049   60732 kubeadm.go:739] kubelet initialised
	I0725 18:50:28.557074   60732 kubeadm.go:740] duration metric: took 5.692584ms waiting for restarted kubelet to initialise ...
	I0725 18:50:28.557083   60732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:28.564396   60732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.568721   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568745   60732 pod_ready.go:81] duration metric: took 4.325942ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.568755   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.568762   60732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.572373   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572397   60732 pod_ready.go:81] duration metric: took 3.627867ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.572404   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "etcd-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.572411   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.576876   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576897   60732 pod_ready.go:81] duration metric: took 4.478779ms for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.576903   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.576909   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:28.669762   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669788   60732 pod_ready.go:81] duration metric: took 92.870934ms for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:28.669797   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:28.669803   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.069536   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069564   60732 pod_ready.go:81] duration metric: took 399.753713ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.069573   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-proxy-xk2lq" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.069580   60732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.471102   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471130   60732 pod_ready.go:81] duration metric: took 401.542911ms for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.471139   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.471145   60732 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:29.869464   60732 pod_ready.go:97] node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869499   60732 pod_ready.go:81] duration metric: took 398.344638ms for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:29.869511   60732 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-646344" hosting pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:29.869520   60732 pod_ready.go:38] duration metric: took 1.312426343s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:29.869549   60732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:29.881205   60732 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:29.881230   60732 kubeadm.go:597] duration metric: took 9.773206057s to restartPrimaryControlPlane
	I0725 18:50:29.881241   60732 kubeadm.go:394] duration metric: took 9.818649836s to StartCluster
	I0725 18:50:29.881264   60732 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.881348   60732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:29.882924   60732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:29.883197   60732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.133 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:29.883269   60732 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:29.883366   60732 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-646344"
	I0725 18:50:29.883380   60732 addons.go:69] Setting default-storageclass=true in profile "embed-certs-646344"
	I0725 18:50:29.883401   60732 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-646344"
	W0725 18:50:29.883411   60732 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:29.883425   60732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-646344"
	I0725 18:50:29.883419   60732 addons.go:69] Setting metrics-server=true in profile "embed-certs-646344"
	I0725 18:50:29.883444   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883461   60732 addons.go:234] Setting addon metrics-server=true in "embed-certs-646344"
	W0725 18:50:29.883481   60732 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:29.883443   60732 config.go:182] Loaded profile config "embed-certs-646344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:50:29.883512   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.883840   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883870   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883929   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.883969   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.883935   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.884014   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.885204   60732 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:29.886676   60732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:29.899359   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0725 18:50:29.899418   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0725 18:50:29.899865   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900280   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.900493   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900513   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900744   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.900769   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.900850   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901092   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.901288   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.901473   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.901504   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.903520   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0725 18:50:29.903975   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.904512   60732 addons.go:234] Setting addon default-storageclass=true in "embed-certs-646344"
	W0725 18:50:29.904529   60732 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:29.904542   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.904551   60732 host.go:66] Checking if "embed-certs-646344" exists ...
	I0725 18:50:29.904558   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.904830   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.904854   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.904861   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.905388   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.905425   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.917614   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0725 18:50:29.918105   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.918628   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.918660   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.918960   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.919128   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.920885   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.922852   60732 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:29.923872   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0725 18:50:29.923895   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37193
	I0725 18:50:29.924134   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:29.924148   60732 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:29.924167   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.924376   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924451   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.924817   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924837   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.924970   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.924985   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.925223   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.925473   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.925493   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.926319   60732 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:29.926366   60732 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:29.926970   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.927368   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927798   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.927829   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.927971   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.928192   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.928355   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.928445   60732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:28.529935   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:28.530428   59378 main.go:141] libmachine: (no-preload-371663) DBG | unable to find current IP address of domain no-preload-371663 in network mk-no-preload-371663
	I0725 18:50:28.530449   59378 main.go:141] libmachine: (no-preload-371663) DBG | I0725 18:50:28.530381   61562 retry.go:31] will retry after 2.913225709s: waiting for machine to come up
	I0725 18:50:29.928527   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.929735   60732 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:29.929755   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:29.929770   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.932668   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933040   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.933066   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.933304   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.933499   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.933674   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.933806   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:29.947401   60732 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42223
	I0725 18:50:29.947801   60732 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:29.948222   60732 main.go:141] libmachine: Using API Version  1
	I0725 18:50:29.948249   60732 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:29.948567   60732 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:29.948819   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetState
	I0725 18:50:29.950344   60732 main.go:141] libmachine: (embed-certs-646344) Calling .DriverName
	I0725 18:50:29.950550   60732 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:29.950566   60732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:29.950584   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHHostname
	I0725 18:50:29.953193   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953589   60732 main.go:141] libmachine: (embed-certs-646344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:67:ef", ip: ""} in network mk-embed-certs-646344: {Iface:virbr1 ExpiryTime:2024-07-25 19:50:07 +0000 UTC Type:0 Mac:52:54:00:59:67:ef Iaid: IPaddr:192.168.61.133 Prefix:24 Hostname:embed-certs-646344 Clientid:01:52:54:00:59:67:ef}
	I0725 18:50:29.953618   60732 main.go:141] libmachine: (embed-certs-646344) DBG | domain embed-certs-646344 has defined IP address 192.168.61.133 and MAC address 52:54:00:59:67:ef in network mk-embed-certs-646344
	I0725 18:50:29.953892   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHPort
	I0725 18:50:29.954062   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHKeyPath
	I0725 18:50:29.954224   60732 main.go:141] libmachine: (embed-certs-646344) Calling .GetSSHUsername
	I0725 18:50:29.954348   60732 sshutil.go:53] new ssh client: &{IP:192.168.61.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/embed-certs-646344/id_rsa Username:docker}
	I0725 18:50:30.074297   60732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:30.095138   60732 node_ready.go:35] waiting up to 6m0s for node "embed-certs-646344" to be "Ready" ...
	I0725 18:50:30.149031   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:30.154470   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:30.247852   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:30.247872   60732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:30.264189   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:30.264220   60732 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:30.282583   60732 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:30.282606   60732 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:30.298927   60732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:31.226498   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.071992912s)
	I0725 18:50:31.226572   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226587   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.226730   60732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.077663797s)
	I0725 18:50:31.226771   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.226782   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227150   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227166   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227166   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227171   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227175   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227183   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227186   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227192   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.227198   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.227217   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227468   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227483   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227495   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.227502   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.227548   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.227556   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.234538   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.234562   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.234822   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.234839   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237597   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237615   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.237853   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.237871   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.237871   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.237879   60732 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:31.237888   60732 main.go:141] libmachine: (embed-certs-646344) Calling .Close
	I0725 18:50:31.238123   60732 main.go:141] libmachine: (embed-certs-646344) DBG | Closing plugin on server side
	I0725 18:50:31.238133   60732 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:31.238144   60732 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:31.238155   60732 addons.go:475] Verifying addon metrics-server=true in "embed-certs-646344"
	I0725 18:50:31.239876   60732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0725 18:50:31.241165   60732 addons.go:510] duration metric: took 1.357900639s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
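	To verify the same state by hand after the run, one could list the profile's addons and the objects they create (profile and context name taken from the log; the object names assume the addons' default manifests):
	  out/minikube-linux-amd64 -p embed-certs-646344 addons list
	  kubectl --context embed-certs-646344 -n kube-system get deploy/metrics-server pod/storage-provisioner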
	I0725 18:50:26.540560   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.039938   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:27.539928   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.039509   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:28.540137   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.040535   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:29.539745   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.039557   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.540254   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:31.040189   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:30.787880   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:33.288654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
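	The interleaved pod_ready lines from process 59645 belong to another profile running concurrently that is still waiting on its metrics-server pod. A manual way to see why such a pod stays NotReady (the label selector is assumed from the addon's usual manifests; the pod name is taken from the log) is:
	  kubectl -n kube-system get pods -l k8s-app=metrics-server
	  kubectl -n kube-system describe pod metrics-server-569cc877fc-5js8s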
	I0725 18:50:31.446688   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has current primary IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.447343   59378 main.go:141] libmachine: (no-preload-371663) Found IP for machine: 192.168.72.62
	I0725 18:50:31.447351   59378 main.go:141] libmachine: (no-preload-371663) Reserving static IP address...
	I0725 18:50:31.447800   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.447831   59378 main.go:141] libmachine: (no-preload-371663) DBG | skip adding static IP to network mk-no-preload-371663 - found existing host DHCP lease matching {name: "no-preload-371663", mac: "52:54:00:dc:2b:39", ip: "192.168.72.62"}
	I0725 18:50:31.447848   59378 main.go:141] libmachine: (no-preload-371663) Reserved static IP address: 192.168.72.62
	I0725 18:50:31.447862   59378 main.go:141] libmachine: (no-preload-371663) Waiting for SSH to be available...
	I0725 18:50:31.447875   59378 main.go:141] libmachine: (no-preload-371663) DBG | Getting to WaitForSSH function...
	I0725 18:50:31.449978   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450325   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.450344   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.450468   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH client type: external
	I0725 18:50:31.450499   59378 main.go:141] libmachine: (no-preload-371663) DBG | Using SSH private key: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa (-rw-------)
	I0725 18:50:31.450530   59378 main.go:141] libmachine: (no-preload-371663) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0725 18:50:31.450547   59378 main.go:141] libmachine: (no-preload-371663) DBG | About to run SSH command:
	I0725 18:50:31.450553   59378 main.go:141] libmachine: (no-preload-371663) DBG | exit 0
	I0725 18:50:31.576105   59378 main.go:141] libmachine: (no-preload-371663) DBG | SSH cmd err, output: <nil>: 
	I0725 18:50:31.576631   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetConfigRaw
	I0725 18:50:31.577245   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.579460   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.579968   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.580003   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.580381   59378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/config.json ...
	I0725 18:50:31.580703   59378 machine.go:94] provisionDockerMachine start ...
	I0725 18:50:31.580728   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:31.580956   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.583261   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583564   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.583592   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.583717   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.583910   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584085   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.584246   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.584476   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.584689   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.584701   59378 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:50:31.696230   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 18:50:31.696261   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696509   59378 buildroot.go:166] provisioning hostname "no-preload-371663"
	I0725 18:50:31.696536   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.696714   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.699042   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699322   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.699359   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.699484   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.699701   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.699968   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.700164   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.700480   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.700503   59378 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-371663 && echo "no-preload-371663" | sudo tee /etc/hostname
	I0725 18:50:31.826044   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-371663
	
	I0725 18:50:31.826069   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.828951   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829261   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.829313   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.829483   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:31.829695   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.829878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:31.830065   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:31.830274   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:31.830449   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:31.830466   59378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-371663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-371663/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-371663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:50:31.948518   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:50:31.948561   59378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19326-5877/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-5877/.minikube}
	I0725 18:50:31.948739   59378 buildroot.go:174] setting up certificates
	I0725 18:50:31.948753   59378 provision.go:84] configureAuth start
	I0725 18:50:31.948771   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetMachineName
	I0725 18:50:31.949045   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:31.951790   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952169   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.952194   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.952363   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:31.954317   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954610   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:31.954633   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:31.954770   59378 provision.go:143] copyHostCerts
	I0725 18:50:31.954835   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem, removing ...
	I0725 18:50:31.954848   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem
	I0725 18:50:31.954901   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/key.pem (1679 bytes)
	I0725 18:50:31.954987   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem, removing ...
	I0725 18:50:31.954997   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem
	I0725 18:50:31.955021   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/ca.pem (1078 bytes)
	I0725 18:50:31.955074   59378 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem, removing ...
	I0725 18:50:31.955081   59378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem
	I0725 18:50:31.955097   59378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-5877/.minikube/cert.pem (1123 bytes)
	I0725 18:50:31.955149   59378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem org=jenkins.no-preload-371663 san=[127.0.0.1 192.168.72.62 localhost minikube no-preload-371663]
	I0725 18:50:32.038369   59378 provision.go:177] copyRemoteCerts
	I0725 18:50:32.038427   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:50:32.038448   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.041392   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041787   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.041823   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.041961   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.042148   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.042322   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.042454   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.130425   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 18:50:32.153447   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 18:50:32.179831   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 18:50:32.202512   59378 provision.go:87] duration metric: took 253.73326ms to configureAuth
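The configureAuth step above generates a server certificate for the listed SANs and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest over SSH. Below is a minimal Go sketch of that copy, assuming plain ssh/scp binaries on PATH; the run helper, the key path and the local cert directory are placeholders for illustration, not minikube's actual API.

// copy_remote_certs.go - sketch only; mirrors the mkdir + scp sequence in the log.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and surfaces its combined output on failure.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	host := "docker@192.168.72.62"           // guest user/IP as shown in the log
	key := "/path/to/machines/<name>/id_rsa" // placeholder private key path

	// ensure the target directory exists on the guest
	if err := run("ssh", "-i", key, host, "sudo mkdir -p /etc/docker"); err != nil {
		panic(err)
	}
	// copy the CA and the freshly generated server cert/key
	for _, f := range []string{"ca.pem", "server.pem", "server-key.pem"} {
		if err := run("scp", "-i", key, "/local/certs/"+f, host+":/tmp/"+f); err != nil {
			panic(err)
		}
		if err := run("ssh", "-i", key, host, "sudo mv /tmp/"+f+" /etc/docker/"+f); err != nil {
			panic(err)
		}
	}
}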
	I0725 18:50:32.202539   59378 buildroot.go:189] setting minikube options for container-runtime
	I0725 18:50:32.202722   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:32.202787   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.205038   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205415   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.205445   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.205666   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.205853   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206022   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.206162   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.206347   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.206543   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.206569   59378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0725 18:50:32.483108   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0725 18:50:32.483135   59378 machine.go:97] duration metric: took 902.412636ms to provisionDockerMachine
	I0725 18:50:32.483147   59378 start.go:293] postStartSetup for "no-preload-371663" (driver="kvm2")
	I0725 18:50:32.483162   59378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:50:32.483182   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.483495   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:50:32.483525   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.486096   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486457   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.486484   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.486662   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.486856   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.487002   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.487133   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.575210   59378 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:50:32.579169   59378 info.go:137] Remote host: Buildroot 2023.02.9
	I0725 18:50:32.579196   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/addons for local assets ...
	I0725 18:50:32.579278   59378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-5877/.minikube/files for local assets ...
	I0725 18:50:32.579383   59378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem -> 130592.pem in /etc/ssl/certs
	I0725 18:50:32.579558   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 18:50:32.588619   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:32.611429   59378 start.go:296] duration metric: took 128.267646ms for postStartSetup
	I0725 18:50:32.611471   59378 fix.go:56] duration metric: took 18.430282963s for fixHost
	I0725 18:50:32.611493   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.614328   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614667   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.614696   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.614878   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.615100   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615260   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.615408   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.615587   59378 main.go:141] libmachine: Using SSH client type: native
	I0725 18:50:32.615848   59378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.62 22 <nil> <nil>}
	I0725 18:50:32.615863   59378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 18:50:32.724784   59378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721933432.694745980
	
	I0725 18:50:32.724810   59378 fix.go:216] guest clock: 1721933432.694745980
	I0725 18:50:32.724822   59378 fix.go:229] Guest: 2024-07-25 18:50:32.69474598 +0000 UTC Remote: 2024-07-25 18:50:32.611474903 +0000 UTC m=+371.708292453 (delta=83.271077ms)
	I0725 18:50:32.724850   59378 fix.go:200] guest clock delta is within tolerance: 83.271077ms
	I0725 18:50:32.724864   59378 start.go:83] releasing machines lock for "no-preload-371663", held for 18.543706361s
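The fixHost step ends with the guest-clock check logged above: the provisioner runs `date +%s.%N` on the guest and compares the result with the host clock. A small Go sketch of that comparison follows, assuming the command output is already in hand; the tolerance value is an assumption, not taken from the report.

// clock_delta.go - sketch of the guest/host clock comparison.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1721933432.694745980" // stand-in for the `date +%s.%N` output
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)  // seconds
	nsec, _ := strconv.ParseInt(parts[1], 10, 64) // nanoseconds

	guest := time.Unix(sec, nsec)
	delta := time.Since(guest)

	const tolerance = 2 * time.Second // assumed threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; a resync would be needed\n", delta)
	}
}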
	I0725 18:50:32.724891   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.725152   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:32.727958   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728294   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.728340   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.728478   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.728957   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729091   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:32.729192   59378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:50:32.729243   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.729319   59378 ssh_runner.go:195] Run: cat /version.json
	I0725 18:50:32.729347   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:32.731757   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732040   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732063   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732081   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732196   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732384   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.732538   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:32.732557   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:32.732562   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.732734   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:32.732734   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.732890   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:32.733041   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:32.733164   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:32.845665   59378 ssh_runner.go:195] Run: systemctl --version
	I0725 18:50:32.851484   59378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0725 18:50:32.994671   59378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 18:50:33.000655   59378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 18:50:33.000718   59378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 18:50:33.016541   59378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 18:50:33.016570   59378 start.go:495] detecting cgroup driver to use...
	I0725 18:50:33.016634   59378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 18:50:33.032473   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 18:50:33.046063   59378 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:50:33.046126   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:50:33.059249   59378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:50:33.072607   59378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:50:33.204647   59378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:50:33.353644   59378 docker.go:233] disabling docker service ...
	I0725 18:50:33.353719   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:50:33.368162   59378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:50:33.380709   59378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:50:33.521954   59378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:50:33.656011   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:50:33.668858   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:50:33.685751   59378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0725 18:50:33.685826   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.695022   59378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0725 18:50:33.695106   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.704447   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.713600   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.722782   59378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:50:33.733635   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.744226   59378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.761049   59378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0725 18:50:33.771689   59378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:50:33.781648   59378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0725 18:50:33.781695   59378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0725 18:50:33.794549   59378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:50:33.803765   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:33.915398   59378 ssh_runner.go:195] Run: sudo systemctl restart crio
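The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs driver before the daemon-reload and restart. A rough Go sketch of just those two rewrites, applied to a local copy of the file; the path and the regex-based approach are illustrative, not the tool's implementation.

// crio_conf_rewrite.go - sketch of the pause-image and cgroup-driver edits.
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "02-crio.conf" // local copy; the real file lives in /etc/crio/crio.conf.d/
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	s := string(data)

	// pin the pause image, as the first sed does
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	// switch the cgroup driver and re-add conmon_cgroup, as the following seds do
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		panic(err)
	}
}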
	I0725 18:50:34.054477   59378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0725 18:50:34.054535   59378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0725 18:50:34.058998   59378 start.go:563] Will wait 60s for crictl version
	I0725 18:50:34.059058   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.062552   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:50:34.105552   59378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0725 18:50:34.105616   59378 ssh_runner.go:195] Run: crio --version
	I0725 18:50:34.134591   59378 ssh_runner.go:195] Run: crio --version
	I0725 18:50:34.166581   59378 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0725 18:50:34.167725   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetIP
	I0725 18:50:34.170389   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.170838   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:34.170869   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:34.171014   59378 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0725 18:50:34.174860   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:50:34.186830   59378 kubeadm.go:883] updating cluster {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:50:34.186934   59378 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0725 18:50:34.186964   59378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:50:34.221834   59378 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0725 18:50:34.221863   59378 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 18:50:34.221911   59378 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.221962   59378 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.221975   59378 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.221994   59378 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0725 18:50:34.222013   59378 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.221933   59378 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.222080   59378 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.222307   59378 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223376   59378 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.223405   59378 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0725 18:50:34.223394   59378 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:34.223416   59378 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.223385   59378 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.223445   59378 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.223639   59378 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.223759   59378 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.460560   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0725 18:50:34.464591   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.478896   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.494335   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.507397   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.519589   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.524374   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.639570   59378 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0725 18:50:34.639620   59378 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.639628   59378 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0725 18:50:34.639664   59378 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.639678   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639701   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639728   59378 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0725 18:50:34.639749   59378 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.639756   59378 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0725 18:50:34.639710   59378 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0725 18:50:34.639789   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639791   59378 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.639793   59378 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.639815   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.639822   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660351   59378 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0725 18:50:34.660401   59378 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.660418   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0725 18:50:34.660438   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 18:50:34.660446   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:34.660488   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 18:50:34.660530   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0725 18:50:34.660621   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 18:50:34.748020   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0725 18:50:34.748120   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748133   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.748181   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.748204   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:34.748254   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:34.761895   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.761960   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0725 18:50:34.762002   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:34.762056   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:34.762069   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 18:50:34.766440   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0725 18:50:34.766458   59378 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766478   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0725 18:50:34.766493   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0725 18:50:34.766612   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0725 18:50:34.776491   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0725 18:50:34.806227   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0725 18:50:34.806283   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:34.806386   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:35.506093   59378 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:32.098641   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:34.099078   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:31.540443   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.039950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:32.539852   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.039523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:33.539582   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.040355   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:34.539951   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.040161   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.540076   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:36.040195   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:35.787650   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:37.788363   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:36.755933   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.989415896s)
	I0725 18:50:36.755967   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0725 18:50:36.755980   59378 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.249846616s)
	I0725 18:50:36.756026   59378 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0725 18:50:36.755988   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.756064   59378 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.756113   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:50:36.756116   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0725 18:50:36.755938   59378 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.949524568s)
	I0725 18:50:36.756281   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0725 18:50:38.622350   59378 ssh_runner.go:235] Completed: which crictl: (1.866164977s)
	I0725 18:50:38.622426   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.866163984s)
	I0725 18:50:38.622504   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0725 18:50:38.622540   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622604   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0725 18:50:38.622432   59378 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:36.599286   60732 node_ready.go:53] node "embed-certs-646344" has status "Ready":"False"
	I0725 18:50:37.098495   60732 node_ready.go:49] node "embed-certs-646344" has status "Ready":"True"
	I0725 18:50:37.098517   60732 node_ready.go:38] duration metric: took 7.003335292s for node "embed-certs-646344" to be "Ready" ...
	I0725 18:50:37.098526   60732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:37.104721   60732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109765   60732 pod_ready.go:92] pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.109788   60732 pod_ready.go:81] duration metric: took 5.033244ms for pod "coredns-7db6d8ff4d-89vvx" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.109798   60732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113639   60732 pod_ready.go:92] pod "etcd-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:37.113661   60732 pod_ready.go:81] duration metric: took 3.854986ms for pod "etcd-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:37.113672   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.120875   60732 pod_ready.go:102] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:39.620552   60732 pod_ready.go:92] pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:39.620573   60732 pod_ready.go:81] duration metric: took 2.506893984s for pod "kube-apiserver-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:39.620583   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628931   60732 pod_ready.go:92] pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.628959   60732 pod_ready.go:81] duration metric: took 1.008369558s for pod "kube-controller-manager-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.628973   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634812   60732 pod_ready.go:92] pod "kube-proxy-xk2lq" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:40.634840   60732 pod_ready.go:81] duration metric: took 5.858603ms for pod "kube-proxy-xk2lq" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:40.634853   60732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:36.540043   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.039832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:37.540456   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.039553   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:38.539530   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.040246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:39.539520   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.039506   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.539963   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:41.039590   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:40.290126   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:42.787353   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.108821   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.486186911s)
	I0725 18:50:41.108854   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0725 18:50:41.108878   59378 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108884   59378 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.486217866s)
	I0725 18:50:41.108919   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0725 18:50:41.108925   59378 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 18:50:41.109010   59378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366140   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.257196486s)
	I0725 18:50:44.366170   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0725 18:50:44.366175   59378 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.257147663s)
	I0725 18:50:44.366192   59378 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0725 18:50:44.366206   59378 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:44.366252   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0725 18:50:45.013042   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0725 18:50:45.013079   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:45.013131   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0725 18:50:41.641738   60732 pod_ready.go:92] pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace has status "Ready":"True"
	I0725 18:50:41.641758   60732 pod_ready.go:81] duration metric: took 1.006897558s for pod "kube-scheduler-embed-certs-646344" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:41.641768   60732 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:43.648859   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.147477   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:41.539822   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.039895   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:42.539947   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.040433   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:43.540098   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.040089   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:44.540140   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.040238   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.539529   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:46.040232   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:45.287326   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:47.288029   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.372000   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.358829497s)
	I0725 18:50:46.372038   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0725 18:50:46.372056   59378 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:46.372117   59378 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0725 18:50:48.326922   59378 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954778301s)
	I0725 18:50:48.326952   59378 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19326-5877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0725 18:50:48.326981   59378 cache_images.go:123] Successfully loaded all cached images
	I0725 18:50:48.326987   59378 cache_images.go:92] duration metric: took 14.105111756s to LoadCachedImages
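LoadCachedImages above works image by image: when an image is missing from the guest's container storage, the cached tarball is transferred (skipping the copy when the remote file already exists, hence the "copy: skipping ... (exists)" lines) and loaded with podman. A condensed Go sketch of that loop, assuming ssh/scp binaries and a placeholder key path; the image map below lists only one entry for brevity.

// load_cached_images.go - sketch of the stat / scp / podman-load loop.
package main

import (
	"fmt"
	"os/exec"
)

// sshRun executes a single command on the guest.
func sshRun(key, host, cmd string) error {
	return exec.Command("ssh", "-i", key, host, cmd).Run()
}

func main() {
	key, host := "/path/to/id_rsa", "docker@192.168.72.62"
	images := map[string]string{ // local cached tarball -> path on the guest
		"cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1": "/var/lib/minikube/images/coredns_v1.11.1",
	}
	for local, remote := range images {
		// only copy when the tarball is not already on the guest
		if sshRun(key, host, "stat "+remote) != nil {
			if err := exec.Command("scp", "-i", key, local, host+":"+remote).Run(); err != nil {
				panic(err)
			}
		}
		// load the tarball into the container runtime's image store
		if err := sshRun(key, host, "sudo podman load -i "+remote); err != nil {
			panic(err)
		}
		fmt.Println("loaded", remote)
	}
}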
	I0725 18:50:48.326998   59378 kubeadm.go:934] updating node { 192.168.72.62 8443 v1.31.0-beta.0 crio true true} ...
	I0725 18:50:48.327229   59378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-371663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:50:48.327311   59378 ssh_runner.go:195] Run: crio config
	I0725 18:50:48.380082   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:48.380104   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:48.380116   59378 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:50:48.380141   59378 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.62 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-371663 NodeName:no-preload-371663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:50:48.380276   59378 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-371663"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:50:48.380365   59378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0725 18:50:48.390309   59378 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:50:48.390375   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:50:48.399357   59378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0725 18:50:48.426673   59378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0725 18:50:48.443648   59378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
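The `scp memory --> <path>` lines transfer generated bytes (the kubelet drop-in, the kubelet service unit, kubeadm.yaml.new) straight from memory to files on the VM. A sketch of that idea using golang.org/x/crypto/ssh and `sudo tee`; the *ssh.Client is assumed to exist already, and minikube's ssh_runner uses its own transfer code rather than this helper:

package remotecopy

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// writeRemoteFile streams in-memory bytes to a root-owned path on the VM via
// `sudo tee`, the same idea as the "scp memory --> <path>" log lines above.
func writeRemoteFile(client *ssh.Client, path string, data []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	// The file contents are fed to the remote command on stdin.
	sess.Stdin = bytes.NewReader(data)
	// sudo tee writes stdin to the destination; its echo to stdout is discarded.
	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", path))
}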
	I0725 18:50:48.460908   59378 ssh_runner.go:195] Run: grep 192.168.72.62	control-plane.minikube.internal$ /etc/hosts
	I0725 18:50:48.464505   59378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
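The bash one-liner above rewrites /etc/hosts so that exactly one line maps control-plane.minikube.internal to the node IP. A hypothetical Go equivalent of that idempotent update, written against a local file purely for illustration:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "<tab>hostname" and appends
// a fresh "ip<tab>hostname" line, mirroring the grep/echo/cp one-liner above.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+hostname) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "192.168.72.62", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}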
	I0725 18:50:48.475937   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:48.598976   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:48.614468   59378 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663 for IP: 192.168.72.62
	I0725 18:50:48.614495   59378 certs.go:194] generating shared ca certs ...
	I0725 18:50:48.614511   59378 certs.go:226] acquiring lock for ca certs: {Name:mkae961b8e7098592ad63fee7e911c3a838ab04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:48.614683   59378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key
	I0725 18:50:48.614722   59378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key
	I0725 18:50:48.614732   59378 certs.go:256] generating profile certs ...
	I0725 18:50:48.614802   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.key
	I0725 18:50:48.614860   59378 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key.1b99cd2e
	I0725 18:50:48.614894   59378 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key
	I0725 18:50:48.615018   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem (1338 bytes)
	W0725 18:50:48.615047   59378 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059_empty.pem, impossibly tiny 0 bytes
	I0725 18:50:48.615055   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 18:50:48.615091   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/ca.pem (1078 bytes)
	I0725 18:50:48.615150   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:50:48.615204   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/certs/key.pem (1679 bytes)
	I0725 18:50:48.615259   59378 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem (1708 bytes)
	I0725 18:50:48.615987   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:50:48.647327   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0725 18:50:48.689347   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:50:48.718281   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:50:48.749086   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0725 18:50:48.775795   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:50:48.804894   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:50:48.827724   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:50:48.850476   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/ssl/certs/130592.pem --> /usr/share/ca-certificates/130592.pem (1708 bytes)
	I0725 18:50:48.873193   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:50:48.897778   59378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-5877/.minikube/certs/13059.pem --> /usr/share/ca-certificates/13059.pem (1338 bytes)
	I0725 18:50:48.922891   59378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:50:48.940439   59378 ssh_runner.go:195] Run: openssl version
	I0725 18:50:48.945916   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130592.pem && ln -fs /usr/share/ca-certificates/130592.pem /etc/ssl/certs/130592.pem"
	I0725 18:50:48.956285   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960454   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:41 /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.960503   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130592.pem
	I0725 18:50:48.965881   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/130592.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 18:50:48.975282   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:50:48.984697   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988899   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.988958   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:50:48.993992   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:50:49.003677   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13059.pem && ln -fs /usr/share/ca-certificates/13059.pem /etc/ssl/certs/13059.pem"
	I0725 18:50:49.013434   59378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017584   59378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:41 /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.017633   59378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13059.pem
	I0725 18:50:49.022926   59378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13059.pem /etc/ssl/certs/51391683.0"
	I0725 18:50:49.033066   59378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:50:49.037719   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 18:50:49.043668   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 18:50:49.049308   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 18:50:49.055105   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 18:50:49.060763   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 18:50:49.066635   59378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
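The series of `openssl x509 -checkend 86400` runs verifies that each control-plane certificate is still valid for at least the next 24 hours. A sketch of the same check in Go with crypto/x509; the certificate path below is one of the files checked above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path will still be valid
// "ahead" from now, mirroring `openssl x509 -checkend <seconds>`.
func checkend(path string, ahead time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(ahead).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for the next 24h:", ok)
}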
	I0725 18:50:49.072235   59378 kubeadm.go:392] StartCluster: {Name:no-preload-371663 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:50:49.072358   59378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0725 18:50:49.072426   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.107696   59378 cri.go:89] found id: ""
	I0725 18:50:49.107780   59378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:50:49.118074   59378 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 18:50:49.118098   59378 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 18:50:49.118144   59378 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 18:50:49.127465   59378 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:50:49.128541   59378 kubeconfig.go:125] found "no-preload-371663" server: "https://192.168.72.62:8443"
	I0725 18:50:49.130601   59378 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 18:50:49.140027   59378 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.62
	I0725 18:50:49.140074   59378 kubeadm.go:1160] stopping kube-system containers ...
	I0725 18:50:49.140087   59378 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0725 18:50:49.140148   59378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:50:49.188682   59378 cri.go:89] found id: ""
	I0725 18:50:49.188743   59378 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 18:50:49.205634   59378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:50:49.214829   59378 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:50:49.214858   59378 kubeadm.go:157] found existing configuration files:
	
	I0725 18:50:49.214912   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:50:49.223758   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:50:49.223825   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:50:49.233245   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:50:49.241613   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:50:49.241669   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:50:49.249965   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.258343   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:50:49.258404   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:50:49.267058   59378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:50:49.275241   59378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:50:49.275297   59378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:50:49.284219   59378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:50:49.293754   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:49.398525   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.308879   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.505415   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:50.573519   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
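Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config instead of performing a full init. A sketch of that sequence, assuming kubeadm is on PATH; the harness actually invokes the versioned binary under /var/lib/minikube/binaries over SSH:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// The same phase order as the Run lines above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", append([]string{"kubeadm"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("kubeadm %v: %v\n%s", args, err, out)
		}
		log.Printf("kubeadm %v: ok", args)
	}
}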
	I0725 18:50:50.655766   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:50:50.655857   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.148464   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:50.649767   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:46.539657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.039681   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:47.540207   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.040234   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:48.539937   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.039544   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.539646   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.039759   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:50.540439   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.040293   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:49.786573   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.786918   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:53.790293   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.156896   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.656267   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:51.675997   59378 api_server.go:72] duration metric: took 1.02022659s to wait for apiserver process to appear ...
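The repeated pgrep runs above poll for the kube-apiserver process on a roughly 500ms cadence until it appears. A local sketch of that wait loop; the harness runs pgrep on the VM over SSH:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*` every
// 500ms until the process exists or the timeout elapses.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // pgrep exits 0 when a matching process is found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kube-apiserver process is up")
}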
	I0725 18:50:51.676029   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:50:51.676060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:51.676567   59378 api_server.go:269] stopped: https://192.168.72.62:8443/healthz: Get "https://192.168.72.62:8443/healthz": dial tcp 192.168.72.62:8443: connect: connection refused
	I0725 18:50:52.176176   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.302009   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.302043   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.302060   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.313888   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0725 18:50:54.313913   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0725 18:50:54.676316   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:54.680686   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:54.680712   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.176378   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.181169   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0725 18:50:55.181195   59378 api_server.go:103] status: https://192.168.72.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0725 18:50:55.676817   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:50:55.681072   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:50:55.689674   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:50:55.689697   59378 api_server.go:131] duration metric: took 4.013661633s to wait for apiserver health ...
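The healthz wait tolerates the transient 403 (anonymous access before RBAC bootstrap) and 500 (bootstrap-roles and system-priority-classes post-start hooks still failing) responses above, and keeps polling until /healthz returns 200 "ok". A sketch of such a poller; TLS verification is skipped here for brevity, whereas the real check is configured with the cluster's CA and client credentials:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// logging and retrying on any other status or on connection errors.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			log.Printf("healthz returned %d: %s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.62:8443/healthz", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("ok")
}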
	I0725 18:50:55.689705   59378 cni.go:84] Creating CNI manager for ""
	I0725 18:50:55.689711   59378 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 18:50:55.691626   59378 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 18:50:55.692856   59378 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 18:50:55.705154   59378 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 18:50:55.722942   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:50:55.735231   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:50:55.735270   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0725 18:50:55.735281   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0725 18:50:55.735294   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0725 18:50:55.735303   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0725 18:50:55.735316   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0725 18:50:55.735325   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0725 18:50:55.735338   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:50:55.735346   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0725 18:50:55.735357   59378 system_pods.go:74] duration metric: took 12.387054ms to wait for pod list to return data ...
	I0725 18:50:55.735370   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:50:55.738963   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:50:55.738984   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:50:55.738998   59378 node_conditions.go:105] duration metric: took 3.619707ms to run NodePressure ...
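The NodePressure step reads each node's ephemeral-storage and CPU capacity and its pressure conditions. A rough client-go sketch of the same read, assuming a kubeconfig at the default path; it mirrors the check only loosely:

package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure / DiskPressure / PIDPressure should be False on a healthy node.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}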
	I0725 18:50:55.739017   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 18:50:53.151773   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:55.647633   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:51.540537   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.040242   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:52.539493   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.039657   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:53.540427   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.039461   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:54.540246   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.039484   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:55.539605   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.040573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:56.038936   59378 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043772   59378 kubeadm.go:739] kubelet initialised
	I0725 18:50:56.043793   59378 kubeadm.go:740] duration metric: took 4.834181ms waiting for restarted kubelet to initialise ...
	I0725 18:50:56.043801   59378 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:50:56.050252   59378 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.055796   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055819   59378 pod_ready.go:81] duration metric: took 5.539256ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.055827   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.055845   59378 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.059725   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059745   59378 pod_ready.go:81] duration metric: took 3.890205ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.059755   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "etcd-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.059762   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.063388   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063409   59378 pod_ready.go:81] duration metric: took 3.63968ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.063419   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-apiserver-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.063427   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.126502   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126531   59378 pod_ready.go:81] duration metric: took 63.090083ms for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.126544   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.126554   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.526433   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526465   59378 pod_ready.go:81] duration metric: took 399.900344ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.526477   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-proxy-bf9rt" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.526485   59378 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:56.926658   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926686   59378 pod_ready.go:81] duration metric: took 400.192009ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:56.926696   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "kube-scheduler-no-preload-371663" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:56.926702   59378 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:50:57.326373   59378 pod_ready.go:97] node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326398   59378 pod_ready.go:81] duration metric: took 399.68759ms for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:50:57.326408   59378 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-371663" hosting pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.326415   59378 pod_ready.go:38] duration metric: took 1.282607524s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
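The pod_ready waits above poll each system-critical pod for a Ready condition, and skip (with an error) any pod whose node is not yet Ready. A client-go sketch of the basic poll, assuming an existing clientset; the node-not-Ready short-circuit seen in the log is omitted:

package podready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout elapses.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}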
	I0725 18:50:57.326433   59378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:50:57.338819   59378 ops.go:34] apiserver oom_adj: -16
	I0725 18:50:57.338836   59378 kubeadm.go:597] duration metric: took 8.220732382s to restartPrimaryControlPlane
	I0725 18:50:57.338845   59378 kubeadm.go:394] duration metric: took 8.26661565s to StartCluster
	I0725 18:50:57.338862   59378 settings.go:142] acquiring lock: {Name:mkf2f664ed70ea7e670a0c3a168f3a9f1e2fa575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.338938   59378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:50:57.341213   59378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-5877/kubeconfig: {Name:mka85555d68e3eaa85656655039ec78e850a5f01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:50:57.341506   59378 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.62 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0725 18:50:57.341574   59378 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 18:50:57.341660   59378 addons.go:69] Setting storage-provisioner=true in profile "no-preload-371663"
	I0725 18:50:57.341684   59378 config.go:182] Loaded profile config "no-preload-371663": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0725 18:50:57.341696   59378 addons.go:234] Setting addon storage-provisioner=true in "no-preload-371663"
	I0725 18:50:57.341691   59378 addons.go:69] Setting default-storageclass=true in profile "no-preload-371663"
	W0725 18:50:57.341705   59378 addons.go:243] addon storage-provisioner should already be in state true
	I0725 18:50:57.341719   59378 addons.go:69] Setting metrics-server=true in profile "no-preload-371663"
	I0725 18:50:57.341737   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.341776   59378 addons.go:234] Setting addon metrics-server=true in "no-preload-371663"
	W0725 18:50:57.341790   59378 addons.go:243] addon metrics-server should already be in state true
	I0725 18:50:57.341727   59378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-371663"
	I0725 18:50:57.341827   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.342109   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342146   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342157   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342185   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.342205   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.342238   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.343259   59378 out.go:177] * Verifying Kubernetes components...
	I0725 18:50:57.344618   59378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:50:57.359231   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0725 18:50:57.359295   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41709
	I0725 18:50:57.359759   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360261   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.360528   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360554   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.360885   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.360970   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.360989   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.361279   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.361299   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.361452   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.361551   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0725 18:50:57.361947   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.361954   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.362450   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.362468   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.362901   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.363495   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.363514   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.365316   59378 addons.go:234] Setting addon default-storageclass=true in "no-preload-371663"
	W0725 18:50:57.365329   59378 addons.go:243] addon default-storageclass should already be in state true
	I0725 18:50:57.365349   59378 host.go:66] Checking if "no-preload-371663" exists ...
	I0725 18:50:57.365748   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.365785   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.377970   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
	I0725 18:50:57.379022   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.379523   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.379543   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.379963   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.380124   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.382257   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0725 18:50:57.382648   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.382989   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
	I0725 18:50:57.383098   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383110   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.383292   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.383365   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.383456   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.383764   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.383854   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.383876   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.384308   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.384905   59378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:50:57.384948   59378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:50:57.385117   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.385388   59378 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 18:50:57.386699   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:50:57.386716   59378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:50:57.386716   59378 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:50:57.386784   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.388097   59378 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.388127   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:50:57.388142   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.389322   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389752   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.389782   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.389902   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.390094   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.390251   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.390402   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.391324   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391699   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.391723   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.391870   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.392024   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.392156   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.392289   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.429920   59378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
	I0725 18:50:57.430364   59378 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:50:57.430865   59378 main.go:141] libmachine: Using API Version  1
	I0725 18:50:57.430883   59378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:50:57.431250   59378 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:50:57.431459   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetState
	I0725 18:50:57.433381   59378 main.go:141] libmachine: (no-preload-371663) Calling .DriverName
	I0725 18:50:57.433618   59378 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.433636   59378 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:50:57.433655   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHHostname
	I0725 18:50:57.436318   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437075   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHPort
	I0725 18:50:57.437100   59378 main.go:141] libmachine: (no-preload-371663) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:2b:39", ip: ""} in network mk-no-preload-371663: {Iface:virbr4 ExpiryTime:2024-07-25 19:50:25 +0000 UTC Type:0 Mac:52:54:00:dc:2b:39 Iaid: IPaddr:192.168.72.62 Prefix:24 Hostname:no-preload-371663 Clientid:01:52:54:00:dc:2b:39}
	I0725 18:50:57.437139   59378 main.go:141] libmachine: (no-preload-371663) DBG | domain no-preload-371663 has defined IP address 192.168.72.62 and MAC address 52:54:00:dc:2b:39 in network mk-no-preload-371663
	I0725 18:50:57.437253   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHKeyPath
	I0725 18:50:57.437431   59378 main.go:141] libmachine: (no-preload-371663) Calling .GetSSHUsername
	I0725 18:50:57.437629   59378 sshutil.go:53] new ssh client: &{IP:192.168.72.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/no-preload-371663/id_rsa Username:docker}
	I0725 18:50:57.533461   59378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:50:57.551609   59378 node_ready.go:35] waiting up to 6m0s for node "no-preload-371663" to be "Ready" ...
	I0725 18:50:57.663269   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:50:57.663295   59378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 18:50:57.676948   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:50:57.698961   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:50:57.699589   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:50:57.699608   59378 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:50:57.732899   59378 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:57.732928   59378 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:50:57.783734   59378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:50:58.930567   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.231552088s)
	I0725 18:50:58.930632   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930653   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930686   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.146908463s)
	I0725 18:50:58.930684   59378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.253701775s)
	I0725 18:50:58.930724   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930737   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.930751   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.930739   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931112   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931129   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931137   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931143   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931143   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931150   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931159   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931167   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.931171   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931178   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.931237   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931349   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931363   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.931373   59378 addons.go:475] Verifying addon metrics-server=true in "no-preload-371663"
	I0725 18:50:58.931520   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.931559   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.931576   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932215   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932238   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.932267   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.932277   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.932506   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.932541   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.932556   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940231   59378 main.go:141] libmachine: Making call to close driver server
	I0725 18:50:58.940252   59378 main.go:141] libmachine: (no-preload-371663) Calling .Close
	I0725 18:50:58.940516   59378 main.go:141] libmachine: Successfully made call to close driver server
	I0725 18:50:58.940535   59378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0725 18:50:58.940519   59378 main.go:141] libmachine: (no-preload-371663) DBG | Closing plugin on server side
	I0725 18:50:58.942747   59378 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0725 18:50:56.286642   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.787357   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:58.943983   59378 addons.go:510] duration metric: took 1.602421244s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0725 18:50:59.554933   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:50:57.648530   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:00.147626   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:50:56.539704   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.039573   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:57.539523   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.040168   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:58.540038   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.040304   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:50:59.540248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.039609   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:00.540022   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.039843   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:01.285836   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:03.287743   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.555887   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:04.056538   59378 node_ready.go:53] node "no-preload-371663" has status "Ready":"False"
	I0725 18:51:05.055354   59378 node_ready.go:49] node "no-preload-371663" has status "Ready":"True"
	I0725 18:51:05.055378   59378 node_ready.go:38] duration metric: took 7.50373959s for node "no-preload-371663" to be "Ready" ...
	I0725 18:51:05.055389   59378 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:51:05.061464   59378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066947   59378 pod_ready.go:92] pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.066967   59378 pod_ready.go:81] duration metric: took 5.477209ms for pod "coredns-5cfdc65f69-lq97z" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.066978   59378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071413   59378 pod_ready.go:92] pod "etcd-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.071431   59378 pod_ready.go:81] duration metric: took 4.445948ms for pod "etcd-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.071441   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076020   59378 pod_ready.go:92] pod "kube-apiserver-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:05.076042   59378 pod_ready.go:81] duration metric: took 4.593495ms for pod "kube-apiserver-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:05.076053   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:02.648362   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:04.648959   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:01.539808   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.039515   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:02.540034   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.040266   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:03.539829   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.039496   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:04.540260   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.040236   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:05.540450   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:06.039595   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:06.039675   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:06.077020   60176 cri.go:89] found id: ""
	I0725 18:51:06.077048   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.077058   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:06.077066   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:06.077125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:06.109173   60176 cri.go:89] found id: ""
	I0725 18:51:06.109203   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.109213   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:06.109220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:06.109283   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:06.141838   60176 cri.go:89] found id: ""
	I0725 18:51:06.141875   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.141882   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:06.141888   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:06.141947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:06.175036   60176 cri.go:89] found id: ""
	I0725 18:51:06.175063   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.175074   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:06.175081   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:06.175144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:06.207497   60176 cri.go:89] found id: ""
	I0725 18:51:06.207519   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.207527   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:06.207532   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:06.207589   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:06.241910   60176 cri.go:89] found id: ""
	I0725 18:51:06.241936   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.241943   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:06.241948   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:06.242001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:06.273353   60176 cri.go:89] found id: ""
	I0725 18:51:06.273381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.273391   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:06.273398   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:06.273472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:06.307358   60176 cri.go:89] found id: ""
	I0725 18:51:06.307381   60176 logs.go:276] 0 containers: []
	W0725 18:51:06.307391   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:06.307401   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:06.307415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:06.360759   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:06.360792   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:06.373930   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:06.373956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:51:05.787345   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:08.287436   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:07.081865   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.082937   59378 pod_ready.go:102] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:10.583975   59378 pod_ready.go:92] pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.584001   59378 pod_ready.go:81] duration metric: took 5.507938695s for pod "kube-controller-manager-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.584015   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588959   59378 pod_ready.go:92] pod "kube-proxy-bf9rt" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.588978   59378 pod_ready.go:81] duration metric: took 4.956126ms for pod "kube-proxy-bf9rt" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.588986   59378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593238   59378 pod_ready.go:92] pod "kube-scheduler-no-preload-371663" in "kube-system" namespace has status "Ready":"True"
	I0725 18:51:10.593255   59378 pod_ready.go:81] duration metric: took 4.263169ms for pod "kube-scheduler-no-preload-371663" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:10.593263   59378 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	I0725 18:51:07.147874   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:09.649266   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:51:06.488979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:06.489003   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:06.489018   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:06.553782   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:06.553813   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.093966   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:09.106176   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:09.106242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:09.143847   60176 cri.go:89] found id: ""
	I0725 18:51:09.143872   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.143880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:09.143885   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:09.143936   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:09.178605   60176 cri.go:89] found id: ""
	I0725 18:51:09.178636   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.178647   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:09.178654   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:09.178715   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:09.211866   60176 cri.go:89] found id: ""
	I0725 18:51:09.211892   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.211901   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:09.211906   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:09.211957   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:09.244343   60176 cri.go:89] found id: ""
	I0725 18:51:09.244371   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.244381   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:09.244389   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:09.244445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:09.279416   60176 cri.go:89] found id: ""
	I0725 18:51:09.279440   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.279448   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:09.279463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:09.279530   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:09.317039   60176 cri.go:89] found id: ""
	I0725 18:51:09.317064   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.317071   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:09.317077   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:09.317123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:09.347997   60176 cri.go:89] found id: ""
	I0725 18:51:09.348031   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.348042   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:09.348049   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:09.348107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:09.380485   60176 cri.go:89] found id: ""
	I0725 18:51:09.380514   60176 logs.go:276] 0 containers: []
	W0725 18:51:09.380524   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:09.380535   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:09.380560   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:09.451881   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:09.451920   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:09.488427   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:09.488454   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:09.538096   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:09.538142   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:09.551001   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:09.551026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:09.628882   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:10.287604   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.787008   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.600101   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:15.102797   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.149625   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:14.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:12.129787   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:12.141852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:12.141915   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:12.178227   60176 cri.go:89] found id: ""
	I0725 18:51:12.178257   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.178266   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:12.178271   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:12.178329   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:12.209154   60176 cri.go:89] found id: ""
	I0725 18:51:12.209179   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.209186   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:12.209190   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:12.209238   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:12.244091   60176 cri.go:89] found id: ""
	I0725 18:51:12.244119   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.244127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:12.244134   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:12.244183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:12.277865   60176 cri.go:89] found id: ""
	I0725 18:51:12.277894   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.277906   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:12.277911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:12.277958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:12.311172   60176 cri.go:89] found id: ""
	I0725 18:51:12.311196   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.311207   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:12.311214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:12.311274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:12.341668   60176 cri.go:89] found id: ""
	I0725 18:51:12.341696   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.341706   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:12.341714   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:12.341775   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:12.375342   60176 cri.go:89] found id: ""
	I0725 18:51:12.375372   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.375383   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:12.375390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:12.375449   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:12.409783   60176 cri.go:89] found id: ""
	I0725 18:51:12.409807   60176 logs.go:276] 0 containers: []
	W0725 18:51:12.409814   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:12.409822   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:12.409834   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:12.484503   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:12.484546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:12.522948   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:12.522974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:12.573975   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:12.574008   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:12.587600   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:12.587628   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:12.660403   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.161385   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:15.174773   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:15.174845   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:15.206845   60176 cri.go:89] found id: ""
	I0725 18:51:15.206871   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.206882   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:15.206889   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:15.206949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:15.239306   60176 cri.go:89] found id: ""
	I0725 18:51:15.239335   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.239344   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:15.239350   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:15.239437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:15.276152   60176 cri.go:89] found id: ""
	I0725 18:51:15.276187   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.276198   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:15.276207   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:15.276265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:15.309616   60176 cri.go:89] found id: ""
	I0725 18:51:15.309647   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.309659   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:15.309667   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:15.309729   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:15.343938   60176 cri.go:89] found id: ""
	I0725 18:51:15.343967   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.343978   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:15.343985   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:15.344051   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:15.380268   60176 cri.go:89] found id: ""
	I0725 18:51:15.380298   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.380310   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:15.380317   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:15.380448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:15.421291   60176 cri.go:89] found id: ""
	I0725 18:51:15.421337   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.421347   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:15.421353   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:15.421408   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:15.466805   60176 cri.go:89] found id: ""
	I0725 18:51:15.466826   60176 logs.go:276] 0 containers: []
	W0725 18:51:15.466835   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:15.466845   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:15.466859   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:15.513464   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:15.513489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:15.567742   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:15.567775   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:15.583613   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:15.583647   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:15.653613   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:15.653637   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:15.653651   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:15.287256   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.786753   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.599678   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.600015   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:17.147792   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:19.148724   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:18.230294   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:18.244269   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:18.244352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:18.282255   60176 cri.go:89] found id: ""
	I0725 18:51:18.282281   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.282291   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:18.282298   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:18.282377   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:18.316217   60176 cri.go:89] found id: ""
	I0725 18:51:18.316250   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.316261   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:18.316269   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:18.316349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:18.347730   60176 cri.go:89] found id: ""
	I0725 18:51:18.347756   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.347764   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:18.347769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:18.347815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:18.379968   60176 cri.go:89] found id: ""
	I0725 18:51:18.379991   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.379999   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:18.380006   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:18.380062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:18.415621   60176 cri.go:89] found id: ""
	I0725 18:51:18.415644   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.415652   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:18.415657   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:18.415704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:18.452073   60176 cri.go:89] found id: ""
	I0725 18:51:18.452101   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.452109   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:18.452115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:18.452171   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:18.483337   60176 cri.go:89] found id: ""
	I0725 18:51:18.483382   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.483390   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:18.483396   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:18.483440   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:18.516941   60176 cri.go:89] found id: ""
	I0725 18:51:18.516966   60176 logs.go:276] 0 containers: []
	W0725 18:51:18.516976   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:18.516987   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:18.517002   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:18.587295   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:18.587321   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:18.587338   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:18.666539   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:18.666569   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:18.707434   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:18.707465   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:18.761893   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:18.761932   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.276464   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:21.291939   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:21.292011   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:21.326022   60176 cri.go:89] found id: ""
	I0725 18:51:21.326055   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.326066   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:21.326073   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:21.326130   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:21.366081   60176 cri.go:89] found id: ""
	I0725 18:51:21.366104   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.366112   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:21.366117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:21.366165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:20.287325   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.287799   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:22.101134   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:24.600119   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.647763   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:23.648088   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:25.649170   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:21.403086   60176 cri.go:89] found id: ""
	I0725 18:51:21.403111   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.403122   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:21.403128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:21.403208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:21.439268   60176 cri.go:89] found id: ""
	I0725 18:51:21.439297   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.439305   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:21.439310   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:21.439359   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:21.483601   60176 cri.go:89] found id: ""
	I0725 18:51:21.483631   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.483639   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:21.483645   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:21.483704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:21.519061   60176 cri.go:89] found id: ""
	I0725 18:51:21.519093   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.519103   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:21.519111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:21.519186   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:21.548781   60176 cri.go:89] found id: ""
	I0725 18:51:21.548806   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.548814   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:21.548820   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:21.548881   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:21.581940   60176 cri.go:89] found id: ""
	I0725 18:51:21.581963   60176 logs.go:276] 0 containers: []
	W0725 18:51:21.581970   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:21.581979   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:21.581991   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:21.634758   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:21.634795   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:21.648358   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:21.648382   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:21.716109   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:21.716133   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:21.716149   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:21.794003   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:21.794030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.331731   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:24.344646   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:24.344709   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:24.385373   60176 cri.go:89] found id: ""
	I0725 18:51:24.385395   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.385403   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:24.385408   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:24.385453   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:24.417015   60176 cri.go:89] found id: ""
	I0725 18:51:24.417044   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.417054   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:24.417061   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:24.417125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:24.457093   60176 cri.go:89] found id: ""
	I0725 18:51:24.457118   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.457127   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:24.457132   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:24.457197   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:24.489155   60176 cri.go:89] found id: ""
	I0725 18:51:24.489183   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.489192   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:24.489197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:24.489253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:24.521907   60176 cri.go:89] found id: ""
	I0725 18:51:24.521934   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.521943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:24.521949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:24.522006   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:24.553652   60176 cri.go:89] found id: ""
	I0725 18:51:24.553688   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.553698   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:24.553705   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:24.553765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:24.587957   60176 cri.go:89] found id: ""
	I0725 18:51:24.587989   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.587997   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:24.588002   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:24.588060   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:24.623564   60176 cri.go:89] found id: ""
	I0725 18:51:24.623591   60176 logs.go:276] 0 containers: []
	W0725 18:51:24.623600   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:24.623609   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:24.623624   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:24.676176   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:24.676208   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:24.689179   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:24.689202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:24.761900   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:24.761928   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:24.761943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:24.845021   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:24.845058   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:24.287960   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:26.288704   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.788851   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.099186   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:29.100563   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:28.147374   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:30.148158   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:27.384900   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:27.398947   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:27.399009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:27.431604   60176 cri.go:89] found id: ""
	I0725 18:51:27.431632   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.431641   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:27.431648   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:27.431698   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:27.464167   60176 cri.go:89] found id: ""
	I0725 18:51:27.464201   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.464212   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:27.464220   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:27.464279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:27.497951   60176 cri.go:89] found id: ""
	I0725 18:51:27.497985   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.497996   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:27.498003   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:27.498056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:27.535363   60176 cri.go:89] found id: ""
	I0725 18:51:27.535389   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.535401   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:27.535406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:27.535452   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:27.565506   60176 cri.go:89] found id: ""
	I0725 18:51:27.565531   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.565541   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:27.565548   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:27.565615   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:27.595635   60176 cri.go:89] found id: ""
	I0725 18:51:27.595662   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.595672   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:27.595678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:27.595734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:27.627482   60176 cri.go:89] found id: ""
	I0725 18:51:27.627511   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.627522   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:27.627529   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:27.627596   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:27.663481   60176 cri.go:89] found id: ""
	I0725 18:51:27.663507   60176 logs.go:276] 0 containers: []
	W0725 18:51:27.663517   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:27.663530   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:27.663544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:27.746487   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:27.746519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:27.783100   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:27.783128   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:27.834865   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:27.834895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:27.849097   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:27.849124   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:27.914406   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:30.415417   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:30.429086   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:30.429151   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:30.470514   60176 cri.go:89] found id: ""
	I0725 18:51:30.470538   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.470561   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:30.470569   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:30.470629   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:30.503903   60176 cri.go:89] found id: ""
	I0725 18:51:30.503931   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.503942   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:30.503950   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:30.504014   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:30.535562   60176 cri.go:89] found id: ""
	I0725 18:51:30.535589   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.535597   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:30.535602   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:30.535667   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:30.567435   60176 cri.go:89] found id: ""
	I0725 18:51:30.567461   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.567471   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:30.567478   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:30.567538   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:30.604430   60176 cri.go:89] found id: ""
	I0725 18:51:30.604455   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.604465   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:30.604471   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:30.604540   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:30.644788   60176 cri.go:89] found id: ""
	I0725 18:51:30.644814   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.644834   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:30.644843   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:30.644908   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:30.678530   60176 cri.go:89] found id: ""
	I0725 18:51:30.678572   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.678585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:30.678593   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:30.678668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:30.713090   60176 cri.go:89] found id: ""
	I0725 18:51:30.713112   60176 logs.go:276] 0 containers: []
	W0725 18:51:30.713120   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:30.713128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:30.713141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:30.792075   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:30.792106   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:30.829452   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:30.829482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:30.879437   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:30.879474   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:30.892281   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:30.892308   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:30.959814   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:31.286895   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.786731   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:31.599727   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.600800   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:35.601282   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:32.647508   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:34.648594   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:33.460838   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:33.474242   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:33.474351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:33.508097   60176 cri.go:89] found id: ""
	I0725 18:51:33.508125   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.508134   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:33.508140   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:33.508188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:33.542576   60176 cri.go:89] found id: ""
	I0725 18:51:33.542605   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.542612   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:33.542618   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:33.542666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:33.576079   60176 cri.go:89] found id: ""
	I0725 18:51:33.576106   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.576115   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:33.576122   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:33.576187   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:33.610618   60176 cri.go:89] found id: ""
	I0725 18:51:33.610639   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.610646   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:33.610651   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:33.610702   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:33.641925   60176 cri.go:89] found id: ""
	I0725 18:51:33.641960   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.641972   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:33.641979   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:33.642047   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:33.675283   60176 cri.go:89] found id: ""
	I0725 18:51:33.675318   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.675333   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:33.675346   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:33.675412   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:33.707991   60176 cri.go:89] found id: ""
	I0725 18:51:33.708017   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.708026   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:33.708034   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:33.708094   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:33.744209   60176 cri.go:89] found id: ""
	I0725 18:51:33.744237   60176 logs.go:276] 0 containers: []
	W0725 18:51:33.744247   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:33.744258   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:33.744273   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:33.794620   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:33.794648   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:33.807089   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:33.807118   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:33.870937   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:33.870960   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:33.870976   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:33.953214   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:33.953249   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:36.287050   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.788127   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:38.100230   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:40.600037   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:37.147276   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:39.147994   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:36.491625   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:36.504949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:36.505023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:36.538077   60176 cri.go:89] found id: ""
	I0725 18:51:36.538101   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.538109   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:36.538114   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:36.538165   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:36.570239   60176 cri.go:89] found id: ""
	I0725 18:51:36.570262   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.570269   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:36.570275   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:36.570325   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:36.603096   60176 cri.go:89] found id: ""
	I0725 18:51:36.603124   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.603133   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:36.603139   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:36.603196   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:36.637479   60176 cri.go:89] found id: ""
	I0725 18:51:36.637506   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.637518   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:36.637525   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:36.637580   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:36.670834   60176 cri.go:89] found id: ""
	I0725 18:51:36.670859   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.670868   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:36.670875   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:36.670942   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:36.707825   60176 cri.go:89] found id: ""
	I0725 18:51:36.707851   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.707859   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:36.707866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:36.707924   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:36.748014   60176 cri.go:89] found id: ""
	I0725 18:51:36.748046   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.748058   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:36.748067   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:36.748132   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:36.779939   60176 cri.go:89] found id: ""
	I0725 18:51:36.779967   60176 logs.go:276] 0 containers: []
	W0725 18:51:36.779975   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:36.779982   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:36.779994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:36.836710   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:36.836741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:36.849791   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:36.849830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:36.919247   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:36.919270   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:36.919286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:36.994368   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:36.994405   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:39.530980   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:39.543355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:39.543417   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:39.576897   60176 cri.go:89] found id: ""
	I0725 18:51:39.576925   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.576935   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:39.576941   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:39.576996   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:39.610545   60176 cri.go:89] found id: ""
	I0725 18:51:39.610576   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.610584   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:39.610596   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:39.610651   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:39.642072   60176 cri.go:89] found id: ""
	I0725 18:51:39.642097   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.642107   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:39.642114   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:39.642173   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:39.673841   60176 cri.go:89] found id: ""
	I0725 18:51:39.673866   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.673874   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:39.673880   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:39.673933   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:39.706537   60176 cri.go:89] found id: ""
	I0725 18:51:39.706562   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.706571   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:39.706584   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:39.706635   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:39.744897   60176 cri.go:89] found id: ""
	I0725 18:51:39.744924   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.744935   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:39.744942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:39.745004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:39.780466   60176 cri.go:89] found id: ""
	I0725 18:51:39.780493   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.780503   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:39.780510   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:39.780581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:39.813672   60176 cri.go:89] found id: ""
	I0725 18:51:39.813694   60176 logs.go:276] 0 containers: []
	W0725 18:51:39.813701   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:39.813709   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:39.813721   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:39.862459   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:39.862489   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:39.875276   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:39.875304   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:39.941693   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:39.941715   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:39.941729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:40.017010   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:40.017055   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:41.286377   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.289761   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.600311   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.098813   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:41.647858   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:43.647939   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:45.648657   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:42.559158   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:42.571866   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:42.571945   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:42.605268   60176 cri.go:89] found id: ""
	I0725 18:51:42.605317   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.605326   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:42.605332   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:42.605392   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:42.641719   60176 cri.go:89] found id: ""
	I0725 18:51:42.641753   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.641764   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:42.641774   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:42.641837   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:42.675667   60176 cri.go:89] found id: ""
	I0725 18:51:42.675695   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.675703   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:42.675711   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:42.675773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:42.709895   60176 cri.go:89] found id: ""
	I0725 18:51:42.709923   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.709933   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:42.709940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:42.710002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:42.742278   60176 cri.go:89] found id: ""
	I0725 18:51:42.742308   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.742318   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:42.742325   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:42.742395   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:42.773623   60176 cri.go:89] found id: ""
	I0725 18:51:42.773651   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.773661   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:42.773668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:42.773727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:42.810538   60176 cri.go:89] found id: ""
	I0725 18:51:42.810566   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.810576   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:42.810583   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:42.810657   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:42.850508   60176 cri.go:89] found id: ""
	I0725 18:51:42.850530   60176 logs.go:276] 0 containers: []
	W0725 18:51:42.850537   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:42.850545   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:42.850556   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:42.901350   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:42.901389   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:42.914573   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:42.914600   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:42.978823   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:42.978852   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:42.978866   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:43.057323   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:43.057357   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:45.593677   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:45.607689   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:45.607801   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:45.640969   60176 cri.go:89] found id: ""
	I0725 18:51:45.640997   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.641007   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:45.641014   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:45.641075   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:45.672268   60176 cri.go:89] found id: ""
	I0725 18:51:45.672293   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.672300   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:45.672310   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:45.672396   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:45.705582   60176 cri.go:89] found id: ""
	I0725 18:51:45.705610   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.705618   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:45.705625   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:45.705686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:45.747705   60176 cri.go:89] found id: ""
	I0725 18:51:45.747737   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.747759   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:45.747766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:45.747815   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:45.787258   60176 cri.go:89] found id: ""
	I0725 18:51:45.787284   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.787294   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:45.787302   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:45.787366   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:45.820971   60176 cri.go:89] found id: ""
	I0725 18:51:45.820992   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.821008   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:45.821019   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:45.821068   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:45.853828   60176 cri.go:89] found id: ""
	I0725 18:51:45.853858   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.853869   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:45.853876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:45.853935   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:45.886645   60176 cri.go:89] found id: ""
	I0725 18:51:45.886672   60176 logs.go:276] 0 containers: []
	W0725 18:51:45.886682   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:45.886692   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:45.886708   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:45.953195   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:45.953223   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:45.953239   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:46.027894   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:46.027929   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:46.067935   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:46.067960   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:46.120467   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:46.120500   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:45.788103   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.287846   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:47.100357   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:49.100578   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.148035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:50.148589   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:48.634095   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:48.647390   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:48.647464   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:48.683149   60176 cri.go:89] found id: ""
	I0725 18:51:48.683171   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.683178   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:48.683203   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:48.683252   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:48.720502   60176 cri.go:89] found id: ""
	I0725 18:51:48.720529   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.720539   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:48.720546   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:48.720593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:48.752927   60176 cri.go:89] found id: ""
	I0725 18:51:48.752954   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.752962   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:48.752968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:48.753025   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:48.788434   60176 cri.go:89] found id: ""
	I0725 18:51:48.788460   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.788468   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:48.788474   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:48.788520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:48.825157   60176 cri.go:89] found id: ""
	I0725 18:51:48.825184   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.825194   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:48.825199   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:48.825248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:48.859948   60176 cri.go:89] found id: ""
	I0725 18:51:48.859973   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.859981   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:48.859986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:48.860046   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:48.894788   60176 cri.go:89] found id: ""
	I0725 18:51:48.894811   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.894819   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:48.894824   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:48.894878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:48.929619   60176 cri.go:89] found id: ""
	I0725 18:51:48.929645   60176 logs.go:276] 0 containers: []
	W0725 18:51:48.929653   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:48.929662   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:48.929675   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:49.001842   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:49.001865   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:49.001888   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:49.086265   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:49.086299   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:49.127674   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:49.127704   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:49.181388   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:49.181424   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:50.787213   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:53.287266   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.601462   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.099078   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:52.647863   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:54.648789   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:51.695119   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:51.707568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:51.707630   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:51.742936   60176 cri.go:89] found id: ""
	I0725 18:51:51.742963   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.742973   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:51.742980   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:51.743033   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:51.776584   60176 cri.go:89] found id: ""
	I0725 18:51:51.776610   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.776618   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:51.776623   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:51.776691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:51.809763   60176 cri.go:89] found id: ""
	I0725 18:51:51.809787   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.809795   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:51.809800   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:51.809846   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:51.843330   60176 cri.go:89] found id: ""
	I0725 18:51:51.843359   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.843366   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:51.843372   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:51.843428   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:51.877636   60176 cri.go:89] found id: ""
	I0725 18:51:51.877670   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.877680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:51.877685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:51.877734   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:51.911846   60176 cri.go:89] found id: ""
	I0725 18:51:51.911869   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.911876   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:51.911881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:51.911927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:51.945447   60176 cri.go:89] found id: ""
	I0725 18:51:51.945474   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.945482   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:51.945488   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:51.945539   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:51.976801   60176 cri.go:89] found id: ""
	I0725 18:51:51.976828   60176 logs.go:276] 0 containers: []
	W0725 18:51:51.976838   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:51.976848   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:51.976863   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:51.989131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:51.989158   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:52.055834   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:52.055857   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:52.055871   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:52.132360   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:52.132399   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:52.170676   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:52.170706   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:54.724654   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:54.738852   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:54.738910   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:54.772356   60176 cri.go:89] found id: ""
	I0725 18:51:54.772386   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.772396   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:54.772403   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:54.772463   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:54.805079   60176 cri.go:89] found id: ""
	I0725 18:51:54.805105   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.805115   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:54.805122   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:54.805179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:54.836276   60176 cri.go:89] found id: ""
	I0725 18:51:54.836303   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.836313   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:54.836329   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:54.836394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:54.869019   60176 cri.go:89] found id: ""
	I0725 18:51:54.869046   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.869053   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:54.869059   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:54.869108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:54.905448   60176 cri.go:89] found id: ""
	I0725 18:51:54.905475   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.905485   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:54.905492   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:54.905553   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:54.937364   60176 cri.go:89] found id: ""
	I0725 18:51:54.937387   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.937396   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:54.937401   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:54.937448   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:54.969287   60176 cri.go:89] found id: ""
	I0725 18:51:54.969322   60176 logs.go:276] 0 containers: []
	W0725 18:51:54.969333   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:54.969340   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:54.969405   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:55.002779   60176 cri.go:89] found id: ""
	I0725 18:51:55.002804   60176 logs.go:276] 0 containers: []
	W0725 18:51:55.002811   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:55.002819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:55.002830   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:51:55.015588   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:55.015614   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:55.093349   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:55.093372   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:55.093388   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:55.174006   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:55.174046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:55.211316   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:55.211347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:55.787379   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.286757   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:56.099628   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:58.100403   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:00.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.148430   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:59.648971   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:51:57.762027   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:51:57.774121   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:51:57.774194   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:51:57.814748   60176 cri.go:89] found id: ""
	I0725 18:51:57.814779   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.814790   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:51:57.814798   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:51:57.814860   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:51:57.851037   60176 cri.go:89] found id: ""
	I0725 18:51:57.851063   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.851070   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:51:57.851075   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:51:57.851123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:51:57.882717   60176 cri.go:89] found id: ""
	I0725 18:51:57.882749   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.882760   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:51:57.882768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:51:57.882830   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:51:57.917019   60176 cri.go:89] found id: ""
	I0725 18:51:57.917049   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.917059   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:51:57.917066   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:51:57.917126   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:51:57.950853   60176 cri.go:89] found id: ""
	I0725 18:51:57.950882   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.950891   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:51:57.950896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:51:57.950962   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:51:57.991946   60176 cri.go:89] found id: ""
	I0725 18:51:57.991970   60176 logs.go:276] 0 containers: []
	W0725 18:51:57.991980   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:51:57.991986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:51:57.992049   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:51:58.037572   60176 cri.go:89] found id: ""
	I0725 18:51:58.037602   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.037611   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:51:58.037617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:51:58.037679   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:51:58.073018   60176 cri.go:89] found id: ""
	I0725 18:51:58.073040   60176 logs.go:276] 0 containers: []
	W0725 18:51:58.073048   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:51:58.073056   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:51:58.073068   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:51:58.144357   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:51:58.144382   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:51:58.144398   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:51:58.224162   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:51:58.224202   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:51:58.260888   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:51:58.260914   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:51:58.313819   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:51:58.313848   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:00.826939   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:00.838883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:00.838951   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:00.872544   60176 cri.go:89] found id: ""
	I0725 18:52:00.872573   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.872584   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:00.872600   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:00.872663   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:00.903504   60176 cri.go:89] found id: ""
	I0725 18:52:00.903526   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.903533   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:00.903539   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:00.903585   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:00.938057   60176 cri.go:89] found id: ""
	I0725 18:52:00.938085   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.938095   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:00.938103   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:00.938168   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:00.970586   60176 cri.go:89] found id: ""
	I0725 18:52:00.970616   60176 logs.go:276] 0 containers: []
	W0725 18:52:00.970625   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:00.970631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:00.970699   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:01.004158   60176 cri.go:89] found id: ""
	I0725 18:52:01.004192   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.004201   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:01.004205   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:01.004265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:01.036833   60176 cri.go:89] found id: ""
	I0725 18:52:01.036862   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.036871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:01.036876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:01.036927   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:01.072207   60176 cri.go:89] found id: ""
	I0725 18:52:01.072236   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.072247   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:01.072253   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:01.072309   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:01.110805   60176 cri.go:89] found id: ""
	I0725 18:52:01.110859   60176 logs.go:276] 0 containers: []
	W0725 18:52:01.110871   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:01.110882   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:01.110898   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:01.150422   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:01.150448   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:01.198988   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:01.199026   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:01.212826   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:01.212860   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:01.282008   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:01.282034   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:01.282054   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:00.787431   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.286174   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.599299   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:05.099494   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:02.147372   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:04.147989   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.148300   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:03.865014   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:03.877335   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:03.877419   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:03.913376   60176 cri.go:89] found id: ""
	I0725 18:52:03.913406   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.913413   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:03.913420   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:03.913469   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:03.948997   60176 cri.go:89] found id: ""
	I0725 18:52:03.949022   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.949029   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:03.949034   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:03.949082   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:03.985320   60176 cri.go:89] found id: ""
	I0725 18:52:03.985353   60176 logs.go:276] 0 containers: []
	W0725 18:52:03.985361   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:03.985367   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:03.985423   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:04.019626   60176 cri.go:89] found id: ""
	I0725 18:52:04.019648   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.019656   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:04.019662   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:04.019716   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:04.050947   60176 cri.go:89] found id: ""
	I0725 18:52:04.050978   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.050989   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:04.050997   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:04.051066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:04.083581   60176 cri.go:89] found id: ""
	I0725 18:52:04.083613   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.083625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:04.083633   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:04.083712   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:04.117537   60176 cri.go:89] found id: ""
	I0725 18:52:04.117574   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.117585   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:04.117592   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:04.117652   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:04.151531   60176 cri.go:89] found id: ""
	I0725 18:52:04.151556   60176 logs.go:276] 0 containers: []
	W0725 18:52:04.151563   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:04.151575   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:04.151593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:04.201037   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:04.201067   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:04.214848   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:04.214879   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:04.281309   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:04.281338   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:04.281353   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:04.360880   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:04.360913   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:05.287780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.288971   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:07.100417   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:09.602529   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:08.149450   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:10.647672   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:06.899950   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:06.912053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:06.912124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:06.945726   60176 cri.go:89] found id: ""
	I0725 18:52:06.945752   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.945761   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:06.945766   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:06.945824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:06.979170   60176 cri.go:89] found id: ""
	I0725 18:52:06.979200   60176 logs.go:276] 0 containers: []
	W0725 18:52:06.979210   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:06.979217   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:06.979279   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:07.009633   60176 cri.go:89] found id: ""
	I0725 18:52:07.009661   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.009670   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:07.009675   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:07.009735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:07.042022   60176 cri.go:89] found id: ""
	I0725 18:52:07.042045   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.042054   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:07.042061   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:07.042121   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:07.074755   60176 cri.go:89] found id: ""
	I0725 18:52:07.074779   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.074787   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:07.074792   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:07.074853   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:07.109421   60176 cri.go:89] found id: ""
	I0725 18:52:07.109447   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.109457   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:07.109464   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:07.109522   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:07.144848   60176 cri.go:89] found id: ""
	I0725 18:52:07.144879   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.144889   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:07.144897   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:07.144956   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:07.182129   60176 cri.go:89] found id: ""
	I0725 18:52:07.182157   60176 logs.go:276] 0 containers: []
	W0725 18:52:07.182169   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:07.182178   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:07.182192   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:07.235471   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:07.235509   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:07.251999   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:07.252025   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:07.334671   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:07.334691   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:07.334703   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:07.415819   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:07.415853   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.953603   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:09.966281   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:09.966362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:09.998237   60176 cri.go:89] found id: ""
	I0725 18:52:09.998259   60176 logs.go:276] 0 containers: []
	W0725 18:52:09.998267   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:09.998272   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:09.998332   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:10.030191   60176 cri.go:89] found id: ""
	I0725 18:52:10.030213   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.030220   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:10.030228   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:10.030273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:10.062117   60176 cri.go:89] found id: ""
	I0725 18:52:10.062144   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.062154   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:10.062159   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:10.062208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:10.093801   60176 cri.go:89] found id: ""
	I0725 18:52:10.093831   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.093841   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:10.093848   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:10.093911   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:10.125705   60176 cri.go:89] found id: ""
	I0725 18:52:10.125731   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.125741   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:10.125748   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:10.125814   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:10.158731   60176 cri.go:89] found id: ""
	I0725 18:52:10.158753   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.158761   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:10.158766   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:10.158810   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:10.190408   60176 cri.go:89] found id: ""
	I0725 18:52:10.190435   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.190443   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:10.190449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:10.190503   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:10.221937   60176 cri.go:89] found id: ""
	I0725 18:52:10.221967   60176 logs.go:276] 0 containers: []
	W0725 18:52:10.221977   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:10.221992   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:10.222007   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:10.270299   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:10.270332   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:10.283787   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:10.283823   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:10.358121   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:10.358146   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:10.358163   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:10.437607   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:10.437643   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:09.786088   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:11.786251   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:13.786457   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.099676   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.600380   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.647922   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:14.648433   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:12.978064   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:12.995812   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:12.995868   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:13.041196   60176 cri.go:89] found id: ""
	I0725 18:52:13.041222   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.041231   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:13.041239   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:13.041290   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:13.074981   60176 cri.go:89] found id: ""
	I0725 18:52:13.075005   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.075013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:13.075018   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:13.075078   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:13.108689   60176 cri.go:89] found id: ""
	I0725 18:52:13.108714   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.108725   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:13.108732   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:13.108788   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:13.144876   60176 cri.go:89] found id: ""
	I0725 18:52:13.144903   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.144913   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:13.144920   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:13.145008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:13.177912   60176 cri.go:89] found id: ""
	I0725 18:52:13.177936   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.177943   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:13.177949   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:13.178004   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:13.208752   60176 cri.go:89] found id: ""
	I0725 18:52:13.208783   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.208794   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:13.208802   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:13.208861   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:13.240146   60176 cri.go:89] found id: ""
	I0725 18:52:13.240181   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.240191   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:13.240197   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:13.240265   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:13.276749   60176 cri.go:89] found id: ""
	I0725 18:52:13.276775   60176 logs.go:276] 0 containers: []
	W0725 18:52:13.276783   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:13.276793   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:13.276808   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:13.342307   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:13.342341   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:13.342358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:13.426659   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:13.426691   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:13.462986   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:13.463014   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:13.513921   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:13.513956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.028587   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:16.041712   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:16.041771   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:16.074562   60176 cri.go:89] found id: ""
	I0725 18:52:16.074593   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.074603   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:16.074611   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:16.074668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:16.110581   60176 cri.go:89] found id: ""
	I0725 18:52:16.110610   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.110620   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:16.110627   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:16.110686   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:16.145233   60176 cri.go:89] found id: ""
	I0725 18:52:16.145256   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.145266   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:16.145274   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:16.145333   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:16.180032   60176 cri.go:89] found id: ""
	I0725 18:52:16.180059   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.180070   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:16.180084   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:16.180147   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:16.211984   60176 cri.go:89] found id: ""
	I0725 18:52:16.212013   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.212021   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:16.212028   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:16.212086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:16.243930   60176 cri.go:89] found id: ""
	I0725 18:52:16.243958   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.243965   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:16.243970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:16.244018   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:16.276858   60176 cri.go:89] found id: ""
	I0725 18:52:16.276886   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.276895   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:16.276903   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:16.276964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:16.309039   60176 cri.go:89] found id: ""
	I0725 18:52:16.309068   60176 logs.go:276] 0 containers: []
	W0725 18:52:16.309079   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:16.309089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:16.309103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:16.358664   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:16.358699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:16.371701   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:16.371733   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:52:15.786767   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.787058   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.099941   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.100836   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:17.148099   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:19.150035   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:52:16.440851   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:16.440877   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:16.440892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:16.515546   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:16.515581   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.053916   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:19.067831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:19.067899   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:19.100740   60176 cri.go:89] found id: ""
	I0725 18:52:19.100765   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.100776   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:19.100783   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:19.100844   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:19.137247   60176 cri.go:89] found id: ""
	I0725 18:52:19.137272   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.137279   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:19.137284   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:19.137348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:19.181550   60176 cri.go:89] found id: ""
	I0725 18:52:19.181582   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.181594   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:19.181601   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:19.181666   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:19.215392   60176 cri.go:89] found id: ""
	I0725 18:52:19.215418   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.215427   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:19.215433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:19.215495   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:19.247896   60176 cri.go:89] found id: ""
	I0725 18:52:19.247923   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.247933   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:19.247940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:19.248001   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:19.285250   60176 cri.go:89] found id: ""
	I0725 18:52:19.285276   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.285286   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:19.285293   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:19.285362   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:19.323470   60176 cri.go:89] found id: ""
	I0725 18:52:19.323500   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.323510   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:19.323518   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:19.323583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:19.358435   60176 cri.go:89] found id: ""
	I0725 18:52:19.358458   60176 logs.go:276] 0 containers: []
	W0725 18:52:19.358466   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:19.358475   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:19.358491   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:19.422806   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:19.422825   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:19.422837   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:19.504316   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:19.504370   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:19.543929   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:19.543956   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:19.596268   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:19.596300   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:20.286982   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.287235   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.601342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.099874   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:21.648118   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:24.147655   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.148904   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:22.110193   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:22.123411   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:22.123472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:22.158539   60176 cri.go:89] found id: ""
	I0725 18:52:22.158577   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.158588   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:22.158595   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:22.158654   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:22.196231   60176 cri.go:89] found id: ""
	I0725 18:52:22.196260   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.196270   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:22.196277   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:22.196354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:22.233119   60176 cri.go:89] found id: ""
	I0725 18:52:22.233150   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.233160   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:22.233167   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:22.233231   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:22.265273   60176 cri.go:89] found id: ""
	I0725 18:52:22.265302   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.265312   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:22.265322   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:22.265384   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:22.298933   60176 cri.go:89] found id: ""
	I0725 18:52:22.298959   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.298968   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:22.298982   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:22.299055   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:22.330841   60176 cri.go:89] found id: ""
	I0725 18:52:22.330877   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.330888   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:22.330896   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:22.330965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:22.363717   60176 cri.go:89] found id: ""
	I0725 18:52:22.363743   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.363753   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:22.363760   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:22.363818   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:22.398672   60176 cri.go:89] found id: ""
	I0725 18:52:22.398701   60176 logs.go:276] 0 containers: []
	W0725 18:52:22.398711   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:22.398722   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:22.398739   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:22.452774   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:22.452807   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:22.465478   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:22.465507   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:22.538473   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:22.538492   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:22.538504   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:22.622982   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:22.623029   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:25.163174   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:25.176183   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:25.176242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:25.212455   60176 cri.go:89] found id: ""
	I0725 18:52:25.212488   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.212497   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:25.212504   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:25.212558   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:25.249901   60176 cri.go:89] found id: ""
	I0725 18:52:25.249930   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.249938   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:25.249943   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:25.250002   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:25.284400   60176 cri.go:89] found id: ""
	I0725 18:52:25.284425   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.284435   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:25.284443   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:25.284510   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:25.322175   60176 cri.go:89] found id: ""
	I0725 18:52:25.322199   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.322208   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:25.322214   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:25.322274   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:25.358579   60176 cri.go:89] found id: ""
	I0725 18:52:25.358606   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.358613   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:25.358618   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:25.358668   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:25.393516   60176 cri.go:89] found id: ""
	I0725 18:52:25.393541   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.393552   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:25.393559   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:25.393619   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:25.426256   60176 cri.go:89] found id: ""
	I0725 18:52:25.426284   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.426293   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:25.426300   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:25.426386   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:25.460227   60176 cri.go:89] found id: ""
	I0725 18:52:25.460249   60176 logs.go:276] 0 containers: []
	W0725 18:52:25.460257   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:25.460265   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:25.460276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:25.512461   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:25.512494   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:25.526304   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:25.526347   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:25.597593   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:25.597618   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:25.597634   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:25.674233   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:25.674269   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:24.787536   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:27.286447   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:26.100033   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.599703   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.648517   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:30.650728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:28.209473   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:28.223161   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:28.223226   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:28.260471   60176 cri.go:89] found id: ""
	I0725 18:52:28.260500   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.260510   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:28.260517   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:28.260578   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:28.296055   60176 cri.go:89] found id: ""
	I0725 18:52:28.296093   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.296109   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:28.296117   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:28.296179   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:28.327790   60176 cri.go:89] found id: ""
	I0725 18:52:28.327819   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.327830   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:28.327836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:28.327896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:28.359967   60176 cri.go:89] found id: ""
	I0725 18:52:28.359994   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.360005   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:28.360012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:28.360076   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:28.394025   60176 cri.go:89] found id: ""
	I0725 18:52:28.394057   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.394065   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:28.394070   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:28.394119   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:28.425844   60176 cri.go:89] found id: ""
	I0725 18:52:28.425866   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.425874   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:28.425881   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:28.425952   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:28.459239   60176 cri.go:89] found id: ""
	I0725 18:52:28.459266   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.459276   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:28.459283   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:28.459355   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:28.493964   60176 cri.go:89] found id: ""
	I0725 18:52:28.493992   60176 logs.go:276] 0 containers: []
	W0725 18:52:28.494004   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:28.494015   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:28.494030   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:28.543108   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:28.543138   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:28.556408   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:28.556440   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:28.622780   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:28.622802   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:28.622815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:28.705901   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:28.705935   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.247642   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:31.260467   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:31.260536   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:31.293280   60176 cri.go:89] found id: ""
	I0725 18:52:31.293303   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.293311   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:31.293316   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:31.293372   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:31.325186   60176 cri.go:89] found id: ""
	I0725 18:52:31.325220   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.325232   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:31.325238   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:31.325295   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:31.359715   60176 cri.go:89] found id: ""
	I0725 18:52:31.359744   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.359756   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:31.359763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:31.359821   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:29.287628   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.787471   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.099921   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.600091   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:33.147181   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:35.147612   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:31.396998   60176 cri.go:89] found id: ""
	I0725 18:52:31.397031   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.397043   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:31.397051   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:31.397107   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:31.430896   60176 cri.go:89] found id: ""
	I0725 18:52:31.430921   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.430934   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:31.430941   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:31.430993   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:31.464746   60176 cri.go:89] found id: ""
	I0725 18:52:31.464775   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.464785   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:31.464791   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:31.464856   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:31.500645   60176 cri.go:89] found id: ""
	I0725 18:52:31.500668   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.500677   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:31.500682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:31.500730   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:31.534394   60176 cri.go:89] found id: ""
	I0725 18:52:31.534418   60176 logs.go:276] 0 containers: []
	W0725 18:52:31.534427   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:31.534434   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:31.534446   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:31.615633   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:31.615667   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:31.657138   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:31.657164   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:31.707872   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:31.707907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:31.721076   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:31.721100   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:31.787451   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:34.288248   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:34.301172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:34.301230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:34.333115   60176 cri.go:89] found id: ""
	I0725 18:52:34.333143   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.333153   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:34.333159   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:34.333206   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:34.368762   60176 cri.go:89] found id: ""
	I0725 18:52:34.368794   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.368805   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:34.368812   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:34.368875   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:34.404655   60176 cri.go:89] found id: ""
	I0725 18:52:34.404681   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.404691   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:34.404699   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:34.404759   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:34.438034   60176 cri.go:89] found id: ""
	I0725 18:52:34.438058   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.438068   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:34.438075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:34.438134   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:34.472642   60176 cri.go:89] found id: ""
	I0725 18:52:34.472667   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.472678   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:34.472684   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:34.472744   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:34.511813   60176 cri.go:89] found id: ""
	I0725 18:52:34.511846   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.511858   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:34.511876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:34.511947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:34.544142   60176 cri.go:89] found id: ""
	I0725 18:52:34.544172   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.544183   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:34.544190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:34.544253   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:34.580404   60176 cri.go:89] found id: ""
	I0725 18:52:34.580428   60176 logs.go:276] 0 containers: []
	W0725 18:52:34.580439   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:34.580451   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:34.580468   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:34.620866   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:34.620892   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:34.675204   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:34.675237   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:34.688592   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:34.688616   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:34.760208   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:34.760234   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:34.760251   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:34.288570   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.786448   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.786936   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:36.099207   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:38.099682   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.100107   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.647899   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:40.147664   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:37.337593   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:37.353055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:37.353125   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:37.386957   60176 cri.go:89] found id: ""
	I0725 18:52:37.386985   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.386996   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:37.387003   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:37.387062   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:37.419464   60176 cri.go:89] found id: ""
	I0725 18:52:37.419489   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.419496   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:37.419501   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:37.419557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:37.452553   60176 cri.go:89] found id: ""
	I0725 18:52:37.452582   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.452592   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:37.452598   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:37.452660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:37.484946   60176 cri.go:89] found id: ""
	I0725 18:52:37.484971   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.484978   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:37.484983   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:37.485029   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:37.517509   60176 cri.go:89] found id: ""
	I0725 18:52:37.517535   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.517546   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:37.517554   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:37.517604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:37.549971   60176 cri.go:89] found id: ""
	I0725 18:52:37.549995   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.550003   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:37.550010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:37.550067   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:37.581630   60176 cri.go:89] found id: ""
	I0725 18:52:37.581661   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.581670   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:37.581676   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:37.581736   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:37.616677   60176 cri.go:89] found id: ""
	I0725 18:52:37.616705   60176 logs.go:276] 0 containers: []
	W0725 18:52:37.616714   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:37.616727   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:37.616741   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:37.630482   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:37.630517   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:37.699856   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:37.699883   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:37.699912   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:37.781132   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:37.781162   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:37.819877   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:37.819904   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.372910   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:40.385605   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:40.385672   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:40.420547   60176 cri.go:89] found id: ""
	I0725 18:52:40.420575   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.420586   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:40.420593   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:40.420642   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:40.455644   60176 cri.go:89] found id: ""
	I0725 18:52:40.455666   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.455674   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:40.455679   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:40.455735   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:40.486576   60176 cri.go:89] found id: ""
	I0725 18:52:40.486599   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.486607   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:40.486613   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:40.486661   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:40.520015   60176 cri.go:89] found id: ""
	I0725 18:52:40.520038   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.520046   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:40.520053   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:40.520115   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:40.550645   60176 cri.go:89] found id: ""
	I0725 18:52:40.550672   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.550680   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:40.550685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:40.550739   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:40.584736   60176 cri.go:89] found id: ""
	I0725 18:52:40.584759   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.584766   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:40.584771   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:40.584827   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:40.620112   60176 cri.go:89] found id: ""
	I0725 18:52:40.620140   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.620151   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:40.620158   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:40.620221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:40.660888   60176 cri.go:89] found id: ""
	I0725 18:52:40.660910   60176 logs.go:276] 0 containers: []
	W0725 18:52:40.660917   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:40.660926   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:40.660937   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:40.713935   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:40.713967   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:40.727194   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:40.727218   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:40.797362   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:40.797387   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:40.797408   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:40.878723   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:40.878756   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:41.286942   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.288780   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.600347   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:45.099379   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:42.148037   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:44.648236   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:43.421579   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:43.434054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:43.434113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:43.468844   60176 cri.go:89] found id: ""
	I0725 18:52:43.468870   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.468880   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:43.468887   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:43.468948   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:43.501075   60176 cri.go:89] found id: ""
	I0725 18:52:43.501102   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.501113   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:43.501120   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:43.501175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:43.533347   60176 cri.go:89] found id: ""
	I0725 18:52:43.533372   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.533381   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:43.533387   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:43.533439   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:43.569764   60176 cri.go:89] found id: ""
	I0725 18:52:43.569787   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.569795   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:43.569801   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:43.569851   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:43.604897   60176 cri.go:89] found id: ""
	I0725 18:52:43.604924   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.604935   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:43.604942   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:43.604999   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:43.638584   60176 cri.go:89] found id: ""
	I0725 18:52:43.638621   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.638633   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:43.638640   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:43.638691   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:43.672302   60176 cri.go:89] found id: ""
	I0725 18:52:43.672348   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.672359   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:43.672366   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:43.672425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:43.708589   60176 cri.go:89] found id: ""
	I0725 18:52:43.708620   60176 logs.go:276] 0 containers: []
	W0725 18:52:43.708630   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:43.708641   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:43.708660   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:43.761733   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:43.761766   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:43.775233   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:43.775258   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:43.840767   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:43.840788   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:43.840803   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:43.914698   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:43.914730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:45.786511   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.787882   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.100130   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.600576   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:47.147728   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:49.648227   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:46.451913   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:46.465836   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:46.465896   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:46.499330   60176 cri.go:89] found id: ""
	I0725 18:52:46.499359   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.499369   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:46.499381   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:46.499446   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:46.537724   60176 cri.go:89] found id: ""
	I0725 18:52:46.537748   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.537758   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:46.537764   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:46.537825   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:46.568410   60176 cri.go:89] found id: ""
	I0725 18:52:46.568437   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.568446   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:46.568453   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:46.568519   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:46.599497   60176 cri.go:89] found id: ""
	I0725 18:52:46.599525   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.599535   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:46.599542   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:46.599607   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:46.631388   60176 cri.go:89] found id: ""
	I0725 18:52:46.631418   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.631427   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:46.631433   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:46.631489   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:46.670666   60176 cri.go:89] found id: ""
	I0725 18:52:46.670688   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.670695   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:46.670701   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:46.670756   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:46.702825   60176 cri.go:89] found id: ""
	I0725 18:52:46.702862   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.702874   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:46.702883   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:46.702947   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:46.738431   60176 cri.go:89] found id: ""
	I0725 18:52:46.738459   60176 logs.go:276] 0 containers: []
	W0725 18:52:46.738469   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:46.738479   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:46.738493   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:46.796704   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:46.796748   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:46.812042   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:46.812072   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:46.884905   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:46.884927   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:46.884942   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:46.965733   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:46.965773   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.505190   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:49.519648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:49.519733   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:49.559027   60176 cri.go:89] found id: ""
	I0725 18:52:49.559057   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.559064   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:49.559072   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:49.559124   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:49.591468   60176 cri.go:89] found id: ""
	I0725 18:52:49.591489   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.591497   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:49.591503   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:49.591557   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:49.629091   60176 cri.go:89] found id: ""
	I0725 18:52:49.629120   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.629129   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:49.629135   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:49.629199   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:49.664584   60176 cri.go:89] found id: ""
	I0725 18:52:49.664621   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.664633   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:49.664641   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:49.664693   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:49.695208   60176 cri.go:89] found id: ""
	I0725 18:52:49.695237   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.695247   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:49.695258   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:49.695323   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:49.726260   60176 cri.go:89] found id: ""
	I0725 18:52:49.726288   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.726299   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:49.726306   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:49.726468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:49.759938   60176 cri.go:89] found id: ""
	I0725 18:52:49.759969   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.759981   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:49.759990   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:49.760043   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:49.794113   60176 cri.go:89] found id: ""
	I0725 18:52:49.794142   60176 logs.go:276] 0 containers: []
	W0725 18:52:49.794153   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:49.794164   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:49.794178   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:49.834409   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:49.834443   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:49.890684   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:49.890730   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:49.904504   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:49.904534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:49.971482   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:49.971508   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:49.971523   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:50.286712   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.786827   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.099988   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.600144   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.147545   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:54.147590   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:56.148752   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:52.552586   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:52.564658   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:52.564732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:52.604434   60176 cri.go:89] found id: ""
	I0725 18:52:52.604460   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.604470   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:52.604478   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:52.604532   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:52.638870   60176 cri.go:89] found id: ""
	I0725 18:52:52.638893   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.638907   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:52.638914   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:52.638973   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:52.670494   60176 cri.go:89] found id: ""
	I0725 18:52:52.670521   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.670531   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:52.670538   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:52.670604   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:52.702250   60176 cri.go:89] found id: ""
	I0725 18:52:52.702282   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.702291   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:52.702298   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:52.702360   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:52.734144   60176 cri.go:89] found id: ""
	I0725 18:52:52.734172   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.734181   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:52.734187   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:52.734241   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:52.767581   60176 cri.go:89] found id: ""
	I0725 18:52:52.767606   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.767617   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:52.767624   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:52.767687   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:52.798874   60176 cri.go:89] found id: ""
	I0725 18:52:52.798895   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.798903   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:52.798908   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:52.798965   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:52.829237   60176 cri.go:89] found id: ""
	I0725 18:52:52.829266   60176 logs.go:276] 0 containers: []
	W0725 18:52:52.829276   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:52.829287   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:52.829309   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:52.879820   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:52.879856   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:52.893453   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:52.893477   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:52.962899   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:52.962925   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:52.962944   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:53.042202   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:53.042234   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.581146   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:55.594458   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:55.594529   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:55.628122   60176 cri.go:89] found id: ""
	I0725 18:52:55.628152   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.628163   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:55.628170   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:55.628240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:55.661098   60176 cri.go:89] found id: ""
	I0725 18:52:55.661126   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.661137   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:55.661143   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:55.661195   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:55.694635   60176 cri.go:89] found id: ""
	I0725 18:52:55.694664   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.694675   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:55.694682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:55.694746   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:55.728875   60176 cri.go:89] found id: ""
	I0725 18:52:55.728902   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.728912   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:55.728924   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:55.728986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:55.764386   60176 cri.go:89] found id: ""
	I0725 18:52:55.764414   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.764423   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:55.764430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:55.764487   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:55.798285   60176 cri.go:89] found id: ""
	I0725 18:52:55.798335   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.798348   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:55.798355   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:55.798407   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:55.833049   60176 cri.go:89] found id: ""
	I0725 18:52:55.833076   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.833083   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:55.833088   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:55.833144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:55.872278   60176 cri.go:89] found id: ""
	I0725 18:52:55.872310   60176 logs.go:276] 0 containers: []
	W0725 18:52:55.872335   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:55.872347   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:55.872362   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:55.908301   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:55.908344   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:55.960197   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:55.960230   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:55.973912   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:55.973941   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:56.042103   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:56.042128   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:56.042141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:54.787516   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.286820   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:57.099342   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:59.099712   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.647566   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:00.647721   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:52:58.618832   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:52:58.631315   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:52:58.631374   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:52:58.666492   60176 cri.go:89] found id: ""
	I0725 18:52:58.666521   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.666532   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:52:58.666540   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:52:58.666608   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:52:58.700391   60176 cri.go:89] found id: ""
	I0725 18:52:58.700421   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.700431   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:52:58.700450   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:52:58.700518   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:52:58.734582   60176 cri.go:89] found id: ""
	I0725 18:52:58.734608   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.734617   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:52:58.734621   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:52:58.734692   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:52:58.767777   60176 cri.go:89] found id: ""
	I0725 18:52:58.767806   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.767817   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:52:58.767823   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:52:58.767886   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:52:58.801021   60176 cri.go:89] found id: ""
	I0725 18:52:58.801046   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.801053   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:52:58.801058   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:52:58.801102   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:52:58.833191   60176 cri.go:89] found id: ""
	I0725 18:52:58.833223   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.833231   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:52:58.833236   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:52:58.833284   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:52:58.864805   60176 cri.go:89] found id: ""
	I0725 18:52:58.864839   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.864849   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:52:58.864854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:52:58.864916   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:52:58.896342   60176 cri.go:89] found id: ""
	I0725 18:52:58.896373   60176 logs.go:276] 0 containers: []
	W0725 18:52:58.896384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:52:58.896396   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:52:58.896415   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:52:58.950614   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:52:58.950652   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:52:58.974026   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:52:58.974063   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:52:59.056282   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:52:59.056305   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:52:59.056349   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:52:59.138254   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:52:59.138292   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:52:59.785805   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.787477   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.099859   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.604940   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:03.147177   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:05.147885   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:01.680405   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:01.693093   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:01.693161   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:01.725456   60176 cri.go:89] found id: ""
	I0725 18:53:01.725483   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.725494   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:01.725501   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:01.725562   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:01.757644   60176 cri.go:89] found id: ""
	I0725 18:53:01.757677   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.757688   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:01.757694   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:01.757765   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:01.793640   60176 cri.go:89] found id: ""
	I0725 18:53:01.793660   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.793667   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:01.793672   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:01.793718   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:01.829336   60176 cri.go:89] found id: ""
	I0725 18:53:01.829368   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.829379   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:01.829386   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:01.829442   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:01.864597   60176 cri.go:89] found id: ""
	I0725 18:53:01.864625   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.864636   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:01.864643   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:01.864704   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:01.895962   60176 cri.go:89] found id: ""
	I0725 18:53:01.895990   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.896001   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:01.896012   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:01.896070   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:01.926426   60176 cri.go:89] found id: ""
	I0725 18:53:01.926451   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.926459   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:01.926463   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:01.926517   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:01.957722   60176 cri.go:89] found id: ""
	I0725 18:53:01.957746   60176 logs.go:276] 0 containers: []
	W0725 18:53:01.957755   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:01.957764   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:01.957779   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:02.012061   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:02.012096   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:02.025396   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:02.025423   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:02.088683   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:02.088706   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:02.088718   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:02.170941   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:02.170974   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.713619   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:04.734911   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:04.734970   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:04.793399   60176 cri.go:89] found id: ""
	I0725 18:53:04.793427   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.793438   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:04.793445   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:04.793493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:04.823679   60176 cri.go:89] found id: ""
	I0725 18:53:04.823711   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.823723   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:04.823729   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:04.823793   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:04.854922   60176 cri.go:89] found id: ""
	I0725 18:53:04.854957   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.854964   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:04.854970   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:04.855023   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:04.886913   60176 cri.go:89] found id: ""
	I0725 18:53:04.886937   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.886945   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:04.886953   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:04.887008   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:04.919868   60176 cri.go:89] found id: ""
	I0725 18:53:04.919896   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.919907   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:04.919914   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:04.919979   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:04.953542   60176 cri.go:89] found id: ""
	I0725 18:53:04.953571   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.953581   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:04.953588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:04.953649   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:04.986901   60176 cri.go:89] found id: ""
	I0725 18:53:04.986925   60176 logs.go:276] 0 containers: []
	W0725 18:53:04.986932   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:04.986937   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:04.986986   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:05.020084   60176 cri.go:89] found id: ""
	I0725 18:53:05.020124   60176 logs.go:276] 0 containers: []
	W0725 18:53:05.020133   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:05.020141   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:05.020153   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:05.075512   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:05.075544   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:05.089227   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:05.089256   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:05.155689   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:05.155707   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:05.155719   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:05.230252   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:05.230286   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:04.286327   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.286366   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.287693   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:06.099267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:08.100754   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:10.599173   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.148931   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:09.647549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:07.770919   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:07.784196   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:07.784354   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:07.817549   60176 cri.go:89] found id: ""
	I0725 18:53:07.817581   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.817593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:07.817601   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:07.817674   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:07.852853   60176 cri.go:89] found id: ""
	I0725 18:53:07.852876   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.852883   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:07.852889   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:07.852941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:07.890344   60176 cri.go:89] found id: ""
	I0725 18:53:07.890370   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.890377   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:07.890383   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:07.890443   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:07.921718   60176 cri.go:89] found id: ""
	I0725 18:53:07.921749   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.921760   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:07.921768   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:07.921824   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:07.955721   60176 cri.go:89] found id: ""
	I0725 18:53:07.955753   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.955763   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:07.955769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:07.955820   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:07.987760   60176 cri.go:89] found id: ""
	I0725 18:53:07.987789   60176 logs.go:276] 0 containers: []
	W0725 18:53:07.987799   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:07.987806   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:07.987878   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:08.020881   60176 cri.go:89] found id: ""
	I0725 18:53:08.020912   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.020922   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:08.020929   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:08.020994   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:08.053983   60176 cri.go:89] found id: ""
	I0725 18:53:08.054013   60176 logs.go:276] 0 containers: []
	W0725 18:53:08.054025   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:08.054037   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:08.054053   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:08.134954   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:08.134996   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:08.177056   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:08.177085   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:08.229080   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:08.229121   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:08.242211   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:08.242242   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:08.305979   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:10.806662   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:10.819111   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:10.819172   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:10.854609   60176 cri.go:89] found id: ""
	I0725 18:53:10.854639   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.854652   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:10.854660   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:10.854743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:10.893436   60176 cri.go:89] found id: ""
	I0725 18:53:10.893466   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.893478   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:10.893486   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:10.893555   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:10.927410   60176 cri.go:89] found id: ""
	I0725 18:53:10.927435   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.927444   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:10.927449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:10.927520   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:10.958061   60176 cri.go:89] found id: ""
	I0725 18:53:10.958082   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.958090   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:10.958095   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:10.958149   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:10.988781   60176 cri.go:89] found id: ""
	I0725 18:53:10.988812   60176 logs.go:276] 0 containers: []
	W0725 18:53:10.988824   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:10.988831   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:10.988892   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:11.021096   60176 cri.go:89] found id: ""
	I0725 18:53:11.021126   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.021137   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:11.021145   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:11.021204   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:11.053320   60176 cri.go:89] found id: ""
	I0725 18:53:11.053355   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.053368   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:11.053377   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:11.053445   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:11.085018   60176 cri.go:89] found id: ""
	I0725 18:53:11.085046   60176 logs.go:276] 0 containers: []
	W0725 18:53:11.085055   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:11.085063   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:11.085074   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:11.136102   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:11.136139   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:11.150126   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:11.150154   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:11.219206   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:11.219226   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:11.219243   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:11.301501   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:11.301534   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:10.787076   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.287049   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.100296   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:15.598090   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:11.648889   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:14.148494   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.148801   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:13.840771   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:13.853763   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:13.853848   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:13.889060   60176 cri.go:89] found id: ""
	I0725 18:53:13.889089   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.889098   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:13.889105   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:13.889163   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:13.920861   60176 cri.go:89] found id: ""
	I0725 18:53:13.920889   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.920900   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:13.920910   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:13.920974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:13.952009   60176 cri.go:89] found id: ""
	I0725 18:53:13.952037   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.952048   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:13.952054   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:13.952117   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:13.985991   60176 cri.go:89] found id: ""
	I0725 18:53:13.986020   60176 logs.go:276] 0 containers: []
	W0725 18:53:13.986030   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:13.986036   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:13.986098   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:14.024968   60176 cri.go:89] found id: ""
	I0725 18:53:14.024995   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.025003   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:14.025008   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:14.025079   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:14.058861   60176 cri.go:89] found id: ""
	I0725 18:53:14.058886   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.058897   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:14.058912   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:14.058976   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:14.092587   60176 cri.go:89] found id: ""
	I0725 18:53:14.092613   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.092628   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:14.092634   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:14.092697   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:14.127085   60176 cri.go:89] found id: ""
	I0725 18:53:14.127115   60176 logs.go:276] 0 containers: []
	W0725 18:53:14.127124   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:14.127134   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:14.127148   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:14.179505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:14.179537   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:14.192813   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:14.192840   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:14.256564   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:14.256590   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:14.256604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:14.338570   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:14.338604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:15.287102   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.787128   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:17.599288   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:19.600086   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:18.648466   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:21.147558   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:16.877636   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:16.891131   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:16.891208   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:16.924210   60176 cri.go:89] found id: ""
	I0725 18:53:16.924243   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.924253   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:16.924261   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:16.924343   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:16.957212   60176 cri.go:89] found id: ""
	I0725 18:53:16.957240   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.957247   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:16.957254   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:16.957341   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:16.989205   60176 cri.go:89] found id: ""
	I0725 18:53:16.989236   60176 logs.go:276] 0 containers: []
	W0725 18:53:16.989244   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:16.989249   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:16.989298   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:17.025775   60176 cri.go:89] found id: ""
	I0725 18:53:17.025801   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.025812   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:17.025819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:17.025887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:17.059185   60176 cri.go:89] found id: ""
	I0725 18:53:17.059213   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.059223   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:17.059229   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:17.059275   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:17.090838   60176 cri.go:89] found id: ""
	I0725 18:53:17.090863   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.090871   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:17.090876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:17.090932   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:17.126012   60176 cri.go:89] found id: ""
	I0725 18:53:17.126036   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.126044   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:17.126048   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:17.126106   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:17.165369   60176 cri.go:89] found id: ""
	I0725 18:53:17.165394   60176 logs.go:276] 0 containers: []
	W0725 18:53:17.165405   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:17.165415   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:17.165436   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:17.178730   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:17.178771   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:17.251639   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:17.251666   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:17.251681   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:17.334840   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:17.334887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:17.380868   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:17.380895   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.931610   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:19.943864   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:19.943964   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:19.975865   60176 cri.go:89] found id: ""
	I0725 18:53:19.975893   60176 logs.go:276] 0 containers: []
	W0725 18:53:19.975904   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:19.975910   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:19.975975   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:20.010230   60176 cri.go:89] found id: ""
	I0725 18:53:20.010258   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.010268   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:20.010274   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:20.010321   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:20.042591   60176 cri.go:89] found id: ""
	I0725 18:53:20.042618   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.042626   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:20.042632   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:20.042680   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:20.073184   60176 cri.go:89] found id: ""
	I0725 18:53:20.073212   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.073224   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:20.073231   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:20.073286   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:20.106770   60176 cri.go:89] found id: ""
	I0725 18:53:20.106798   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.106810   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:20.106818   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:20.106888   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:20.141368   60176 cri.go:89] found id: ""
	I0725 18:53:20.141419   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.141429   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:20.141437   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:20.141496   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:20.174814   60176 cri.go:89] found id: ""
	I0725 18:53:20.174841   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.174852   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:20.174859   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:20.174918   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:20.208463   60176 cri.go:89] found id: ""
	I0725 18:53:20.208489   60176 logs.go:276] 0 containers: []
	W0725 18:53:20.208497   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:20.208505   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:20.208519   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:20.220843   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:20.220867   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:20.287846   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:20.287871   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:20.287887   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:20.362354   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:20.362391   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:20.399616   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:20.399650   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:19.790264   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.288082   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.100856   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:24.600029   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:23.148297   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:25.647615   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:22.950804   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:22.963553   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:22.963625   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:22.996193   60176 cri.go:89] found id: ""
	I0725 18:53:22.996215   60176 logs.go:276] 0 containers: []
	W0725 18:53:22.996222   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:22.996228   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:22.996273   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:23.029417   60176 cri.go:89] found id: ""
	I0725 18:53:23.029446   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.029455   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:23.029460   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:23.029508   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:23.062381   60176 cri.go:89] found id: ""
	I0725 18:53:23.062406   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.062414   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:23.062419   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:23.062471   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:23.093948   60176 cri.go:89] found id: ""
	I0725 18:53:23.093975   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.093987   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:23.093995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:23.094066   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:23.128049   60176 cri.go:89] found id: ""
	I0725 18:53:23.128076   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.128085   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:23.128091   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:23.128139   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:23.164593   60176 cri.go:89] found id: ""
	I0725 18:53:23.164617   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.164625   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:23.164631   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:23.164683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:23.197996   60176 cri.go:89] found id: ""
	I0725 18:53:23.198024   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.198032   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:23.198037   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:23.198087   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:23.233498   60176 cri.go:89] found id: ""
	I0725 18:53:23.233533   60176 logs.go:276] 0 containers: []
	W0725 18:53:23.233545   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:23.233565   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:23.233580   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:23.287473   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:23.287506   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:23.300308   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:23.300358   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:23.368879   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:23.368906   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:23.368919   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:23.445420   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:23.445453   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:25.985626   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:25.997898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:25.997971   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:26.030558   60176 cri.go:89] found id: ""
	I0725 18:53:26.030584   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.030593   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:26.030599   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:26.030660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:26.067209   60176 cri.go:89] found id: ""
	I0725 18:53:26.067245   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.067256   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:26.067263   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:26.067348   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:26.100872   60176 cri.go:89] found id: ""
	I0725 18:53:26.100891   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.100897   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:26.100902   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:26.100949   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:26.135077   60176 cri.go:89] found id: ""
	I0725 18:53:26.135102   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.135110   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:26.135115   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:26.135175   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:26.171332   60176 cri.go:89] found id: ""
	I0725 18:53:26.171431   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.171445   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:26.171452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:26.171507   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:26.205883   60176 cri.go:89] found id: ""
	I0725 18:53:26.205912   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.205923   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:26.205930   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:26.205989   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:26.240407   60176 cri.go:89] found id: ""
	I0725 18:53:26.240436   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.240446   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:26.240452   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:26.240513   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:26.273041   60176 cri.go:89] found id: ""
	I0725 18:53:26.273068   60176 logs.go:276] 0 containers: []
	W0725 18:53:26.273078   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:26.273089   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:26.273103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:26.327783   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:26.327815   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:26.342925   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:26.342952   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:53:24.786526   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:26.786771   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:28.786890   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.100267   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.600204   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:27.648059   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:29.648771   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	W0725 18:53:26.412563   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:26.412589   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:26.412605   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:26.493182   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:26.493222   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.030816   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:29.044047   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:29.044104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:29.077288   60176 cri.go:89] found id: ""
	I0725 18:53:29.077335   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.077354   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:29.077362   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:29.077429   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:29.113350   60176 cri.go:89] found id: ""
	I0725 18:53:29.113383   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.113395   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:29.113402   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:29.113472   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:29.147123   60176 cri.go:89] found id: ""
	I0725 18:53:29.147151   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.147161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:29.147168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:29.147224   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:29.182248   60176 cri.go:89] found id: ""
	I0725 18:53:29.182279   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.182296   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:29.182304   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:29.182367   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:29.215750   60176 cri.go:89] found id: ""
	I0725 18:53:29.215777   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.215788   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:29.215795   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:29.215857   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:29.249001   60176 cri.go:89] found id: ""
	I0725 18:53:29.249027   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.249037   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:29.249044   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:29.249104   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:29.281774   60176 cri.go:89] found id: ""
	I0725 18:53:29.281802   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.281812   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:29.281819   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:29.281879   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:29.318703   60176 cri.go:89] found id: ""
	I0725 18:53:29.318728   60176 logs.go:276] 0 containers: []
	W0725 18:53:29.318736   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:29.318744   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:29.318760   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:29.398145   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:29.398170   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:29.398184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:29.474090   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:29.474126   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:29.510143   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:29.510216   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:29.562952   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:29.562988   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:30.787145   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.788031   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.099672   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.148832   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:34.647209   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:32.076743   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:32.090035   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:32.090108   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:32.123139   60176 cri.go:89] found id: ""
	I0725 18:53:32.123173   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.123184   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:32.123191   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:32.123255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:32.156337   60176 cri.go:89] found id: ""
	I0725 18:53:32.156363   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.156372   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:32.156378   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:32.156437   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:32.191566   60176 cri.go:89] found id: ""
	I0725 18:53:32.191597   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.191609   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:32.191617   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:32.191684   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:32.225480   60176 cri.go:89] found id: ""
	I0725 18:53:32.225519   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.225528   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:32.225535   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:32.225593   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:32.257129   60176 cri.go:89] found id: ""
	I0725 18:53:32.257160   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.257169   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:32.257175   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:32.257221   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:32.298142   60176 cri.go:89] found id: ""
	I0725 18:53:32.298171   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.298180   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:32.298190   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:32.298240   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:32.331052   60176 cri.go:89] found id: ""
	I0725 18:53:32.331081   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.331092   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:32.331098   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:32.331143   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:32.364841   60176 cri.go:89] found id: ""
	I0725 18:53:32.364871   60176 logs.go:276] 0 containers: []
	W0725 18:53:32.364882   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:32.364892   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:32.364907   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:32.417931   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:32.417970   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:32.432131   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:32.432159   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:32.499759   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:32.499784   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:32.499806   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:32.579140   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:32.579191   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:35.120647   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:35.133992   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:35.134084   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:35.172030   60176 cri.go:89] found id: ""
	I0725 18:53:35.172052   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.172061   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:35.172067   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:35.172123   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:35.207893   60176 cri.go:89] found id: ""
	I0725 18:53:35.207920   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.207930   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:35.207937   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:35.207991   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:35.241626   60176 cri.go:89] found id: ""
	I0725 18:53:35.241651   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.241661   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:35.241668   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:35.241732   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:35.274017   60176 cri.go:89] found id: ""
	I0725 18:53:35.274047   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.274058   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:35.274064   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:35.274129   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:35.308778   60176 cri.go:89] found id: ""
	I0725 18:53:35.308809   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.308820   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:35.308827   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:35.308890   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:35.341366   60176 cri.go:89] found id: ""
	I0725 18:53:35.341392   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.341400   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:35.341406   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:35.341461   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:35.373955   60176 cri.go:89] found id: ""
	I0725 18:53:35.373983   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.373994   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:35.374001   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:35.374058   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:35.404705   60176 cri.go:89] found id: ""
	I0725 18:53:35.404733   60176 logs.go:276] 0 containers: []
	W0725 18:53:35.404743   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:35.404755   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:35.404794   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:35.455009   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:35.455043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:35.469113   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:35.469141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:35.533466   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:35.533497   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:35.533514   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:35.608513   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:35.608546   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:34.789202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:37.287021   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.100385   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.100515   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:40.599540   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:36.647379   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.648503   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.147602   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:38.147415   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:38.159974   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:38.160032   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:38.191108   60176 cri.go:89] found id: ""
	I0725 18:53:38.191138   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.191150   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:38.191157   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:38.191207   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:38.223494   60176 cri.go:89] found id: ""
	I0725 18:53:38.223519   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.223527   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:38.223533   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:38.223583   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:38.254433   60176 cri.go:89] found id: ""
	I0725 18:53:38.254462   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.254473   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:38.254480   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:38.254546   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:38.286229   60176 cri.go:89] found id: ""
	I0725 18:53:38.286258   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.286268   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:38.286276   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:38.286339   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:38.323332   60176 cri.go:89] found id: ""
	I0725 18:53:38.323362   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.323371   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:38.323378   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:38.323441   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:38.356260   60176 cri.go:89] found id: ""
	I0725 18:53:38.356290   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.356301   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:38.356309   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:38.356383   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:38.388543   60176 cri.go:89] found id: ""
	I0725 18:53:38.388571   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.388582   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:38.388588   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:38.388660   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:38.424003   60176 cri.go:89] found id: ""
	I0725 18:53:38.424030   60176 logs.go:276] 0 containers: []
	W0725 18:53:38.424040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:38.424051   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:38.424065   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:38.474963   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:38.474995   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:38.488392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:38.488425   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:38.561922   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:38.561946   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:38.562116   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:38.646569   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:38.646604   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:41.190319   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:41.202314   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:41.202382   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:41.238344   60176 cri.go:89] found id: ""
	I0725 18:53:41.238370   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.238378   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:41.238383   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:41.238438   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:41.272219   60176 cri.go:89] found id: ""
	I0725 18:53:41.272252   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.272263   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:41.272271   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:41.272349   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:41.307125   60176 cri.go:89] found id: ""
	I0725 18:53:41.307151   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.307161   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:41.307168   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:41.307230   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:41.339277   60176 cri.go:89] found id: ""
	I0725 18:53:41.339307   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.339320   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:41.339328   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:41.339394   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:41.373989   60176 cri.go:89] found id: ""
	I0725 18:53:41.374103   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.374126   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:41.374136   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:41.374205   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:39.287244   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.287891   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.787538   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:42.600625   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.099276   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:43.647388   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:45.648749   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:41.404939   60176 cri.go:89] found id: ""
	I0725 18:53:41.404968   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.404979   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:41.404986   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:41.405050   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:41.436889   60176 cri.go:89] found id: ""
	I0725 18:53:41.436919   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.436931   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:41.436940   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:41.437009   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:41.468457   60176 cri.go:89] found id: ""
	I0725 18:53:41.468486   60176 logs.go:276] 0 containers: []
	W0725 18:53:41.468496   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:41.468506   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:41.468520   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:41.519499   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:41.519529   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:41.533653   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:41.533688   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:41.602134   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:41.602156   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:41.602171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:41.676181   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:41.676214   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.213932   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:44.226286   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:44.226352   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:44.258782   60176 cri.go:89] found id: ""
	I0725 18:53:44.258817   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.258829   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:44.258835   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:44.258887   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:44.308398   60176 cri.go:89] found id: ""
	I0725 18:53:44.308424   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.308432   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:44.308437   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:44.308499   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:44.339388   60176 cri.go:89] found id: ""
	I0725 18:53:44.339414   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.339424   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:44.339430   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:44.339493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:44.369635   60176 cri.go:89] found id: ""
	I0725 18:53:44.369669   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.369679   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:44.369685   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:44.369751   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:44.403834   60176 cri.go:89] found id: ""
	I0725 18:53:44.403859   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.403869   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:44.403876   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:44.403939   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:44.439172   60176 cri.go:89] found id: ""
	I0725 18:53:44.439204   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.439215   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:44.439222   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:44.439287   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:44.474638   60176 cri.go:89] found id: ""
	I0725 18:53:44.474664   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.474674   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:44.474681   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:44.474743   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:44.506205   60176 cri.go:89] found id: ""
	I0725 18:53:44.506226   60176 logs.go:276] 0 containers: []
	W0725 18:53:44.506233   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:44.506241   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:44.506253   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:44.587955   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:44.587994   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:44.626251   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:44.626276   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:44.679008   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:44.679040   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:44.691749   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:44.691776   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:44.763419   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:46.286260   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.287172   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.099923   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:49.600555   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:48.148223   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:50.648549   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:47.263738   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:47.275907   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:47.275974   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:47.313612   60176 cri.go:89] found id: ""
	I0725 18:53:47.313642   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.313651   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:47.313662   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:47.313727   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:47.345186   60176 cri.go:89] found id: ""
	I0725 18:53:47.345215   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.345226   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:47.345233   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:47.345304   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:47.378074   60176 cri.go:89] found id: ""
	I0725 18:53:47.378103   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.378114   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:47.378128   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:47.378188   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:47.407147   60176 cri.go:89] found id: ""
	I0725 18:53:47.407176   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.407186   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:47.407193   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:47.407255   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:47.437015   60176 cri.go:89] found id: ""
	I0725 18:53:47.437049   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.437061   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:47.437068   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:47.437153   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:47.469201   60176 cri.go:89] found id: ""
	I0725 18:53:47.469231   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.469241   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:47.469248   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:47.469331   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:47.501160   60176 cri.go:89] found id: ""
	I0725 18:53:47.501189   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.501199   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:47.501206   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:47.501264   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:47.535102   60176 cri.go:89] found id: ""
	I0725 18:53:47.535140   60176 logs.go:276] 0 containers: []
	W0725 18:53:47.535149   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:47.535159   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:47.535184   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:47.547568   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:47.547593   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:47.616025   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:47.616048   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:47.616062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:47.690450   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:47.690482   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:47.725553   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:47.725589   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.281640   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:50.295201   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:50.295272   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:50.331689   60176 cri.go:89] found id: ""
	I0725 18:53:50.331713   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.331721   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:50.331726   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:50.331770   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:50.362392   60176 cri.go:89] found id: ""
	I0725 18:53:50.362422   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.362434   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:50.362441   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:50.362505   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:50.393410   60176 cri.go:89] found id: ""
	I0725 18:53:50.393433   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.393441   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:50.393449   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:50.393493   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:50.425041   60176 cri.go:89] found id: ""
	I0725 18:53:50.425068   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.425079   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:50.425085   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:50.425144   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:50.461533   60176 cri.go:89] found id: ""
	I0725 18:53:50.461556   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.461563   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:50.461568   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:50.461614   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:50.494395   60176 cri.go:89] found id: ""
	I0725 18:53:50.494417   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.494425   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:50.494431   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:50.494485   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:50.528639   60176 cri.go:89] found id: ""
	I0725 18:53:50.528663   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.528672   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:50.528678   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:50.528724   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:50.562007   60176 cri.go:89] found id: ""
	I0725 18:53:50.562032   60176 logs.go:276] 0 containers: []
	W0725 18:53:50.562040   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:50.562049   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:50.562062   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:50.612107   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:50.612141   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:50.624516   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:50.624540   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:50.724772   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:50.724799   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:50.724818   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:50.813891   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:50.813924   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:50.288626   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.786395   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:52.100268   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:54.598939   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.147764   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:55.147940   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:53.352629   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:53.366863   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:53.366941   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:53.401238   60176 cri.go:89] found id: ""
	I0725 18:53:53.401266   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.401277   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:53.401284   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:53.401351   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:53.434133   60176 cri.go:89] found id: ""
	I0725 18:53:53.434166   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.434178   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:53.434186   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:53.434248   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:53.470135   60176 cri.go:89] found id: ""
	I0725 18:53:53.470157   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.470165   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:53.470170   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:53.470217   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:53.512591   60176 cri.go:89] found id: ""
	I0725 18:53:53.512613   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.512621   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:53.512626   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:53.512683   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:53.544476   60176 cri.go:89] found id: ""
	I0725 18:53:53.544506   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.544517   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:53.544524   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:53.544591   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:53.577697   60176 cri.go:89] found id: ""
	I0725 18:53:53.577727   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.577746   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:53.577753   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:53.577816   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:53.610729   60176 cri.go:89] found id: ""
	I0725 18:53:53.610754   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.610761   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:53.610769   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:53.610817   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:53.645127   60176 cri.go:89] found id: ""
	I0725 18:53:53.645154   60176 logs.go:276] 0 containers: []
	W0725 18:53:53.645164   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:53.645174   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:53.645188   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:53.694575   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:53.694608   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:53.707931   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:53.707958   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:53.778423   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:53.778446   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:53.778460   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:53.860424   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:53.860458   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:55.286806   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.288524   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:56.600953   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:59.099301   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:57.647861   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:00.148873   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:53:56.400993   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:56.418757   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:56.418834   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:56.466300   60176 cri.go:89] found id: ""
	I0725 18:53:56.466330   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.466340   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:56.466348   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:56.466409   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:56.523080   60176 cri.go:89] found id: ""
	I0725 18:53:56.523107   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.523117   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:56.523124   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:56.523184   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:56.554854   60176 cri.go:89] found id: ""
	I0725 18:53:56.554881   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.554891   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:56.554898   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:56.554953   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:56.588851   60176 cri.go:89] found id: ""
	I0725 18:53:56.588876   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.588885   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:56.588892   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:56.588958   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:56.623818   60176 cri.go:89] found id: ""
	I0725 18:53:56.623840   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.623849   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:56.623854   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:56.623902   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:56.658958   60176 cri.go:89] found id: ""
	I0725 18:53:56.658982   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.658990   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:56.658996   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:56.659044   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:56.694689   60176 cri.go:89] found id: ""
	I0725 18:53:56.694715   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.694724   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:56.694729   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:56.694780   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:56.728038   60176 cri.go:89] found id: ""
	I0725 18:53:56.728067   60176 logs.go:276] 0 containers: []
	W0725 18:53:56.728077   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:56.728088   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:56.728103   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:56.805628   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:56.805657   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:56.805672   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:56.886168   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:56.886210   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:56.923004   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:56.923043   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:56.975693   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:56.975729   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.491244   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:53:59.503301   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:53:59.503363   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:53:59.540674   60176 cri.go:89] found id: ""
	I0725 18:53:59.540699   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.540707   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:53:59.540712   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:53:59.540763   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:53:59.575145   60176 cri.go:89] found id: ""
	I0725 18:53:59.575182   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.575192   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:53:59.575199   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:53:59.575260   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:53:59.606952   60176 cri.go:89] found id: ""
	I0725 18:53:59.606978   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.606989   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:53:59.606995   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:53:59.607056   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:53:59.645110   60176 cri.go:89] found id: ""
	I0725 18:53:59.645136   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.645147   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:53:59.645155   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:53:59.645218   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:53:59.676479   60176 cri.go:89] found id: ""
	I0725 18:53:59.676499   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.676507   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:53:59.676512   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:53:59.676581   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:53:59.707454   60176 cri.go:89] found id: ""
	I0725 18:53:59.707482   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.707493   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:53:59.707500   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:53:59.707575   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:53:59.740387   60176 cri.go:89] found id: ""
	I0725 18:53:59.740414   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.740421   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:53:59.740427   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:53:59.740474   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:53:59.774171   60176 cri.go:89] found id: ""
	I0725 18:53:59.774199   60176 logs.go:276] 0 containers: []
	W0725 18:53:59.774207   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:53:59.774216   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:53:59.774231   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:53:59.825138   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:53:59.825171   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:53:59.839715   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:53:59.839742   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:53:59.905645   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:53:59.905681   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:53:59.905699   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:53:59.980909   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:53:59.980943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:53:59.787202   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.286987   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:01.099490   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:03.100056   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.602329   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.647803   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:04.648473   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:02.524178   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:02.538055   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:02.538113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:02.576234   60176 cri.go:89] found id: ""
	I0725 18:54:02.576259   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.576268   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:02.576274   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:02.576340   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:02.607765   60176 cri.go:89] found id: ""
	I0725 18:54:02.607792   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.607803   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:02.607810   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:02.607865   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:02.640566   60176 cri.go:89] found id: ""
	I0725 18:54:02.640592   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.640601   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:02.640606   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:02.640655   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:02.673476   60176 cri.go:89] found id: ""
	I0725 18:54:02.673504   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.673512   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:02.673517   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:02.673565   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:02.706270   60176 cri.go:89] found id: ""
	I0725 18:54:02.706299   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.706309   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:02.706316   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:02.706376   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:02.737108   60176 cri.go:89] found id: ""
	I0725 18:54:02.737138   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.737146   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:02.737152   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:02.737200   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:02.775681   60176 cri.go:89] found id: ""
	I0725 18:54:02.775710   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.775719   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:02.775724   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:02.775773   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:02.808116   60176 cri.go:89] found id: ""
	I0725 18:54:02.808151   60176 logs.go:276] 0 containers: []
	W0725 18:54:02.808159   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:02.808169   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:02.808182   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:02.872505   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:02.872534   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:02.872557   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:02.948158   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:02.948193   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:02.982990   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:02.983020   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:03.031910   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:03.031943   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:05.545994   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:05.559105   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.559174   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.594106   60176 cri.go:89] found id: ""
	I0725 18:54:05.594134   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.594144   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:54:05.594151   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.594232   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.630148   60176 cri.go:89] found id: ""
	I0725 18:54:05.630172   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.630179   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:54:05.630185   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.630242   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.662968   60176 cri.go:89] found id: ""
	I0725 18:54:05.662993   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.663003   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:54:05.663010   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.663059   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.696645   60176 cri.go:89] found id: ""
	I0725 18:54:05.696668   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.696676   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:54:05.696682   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.696738   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:05.730027   60176 cri.go:89] found id: ""
	I0725 18:54:05.730050   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.730058   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:54:05.730063   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:05.730113   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:05.760918   60176 cri.go:89] found id: ""
	I0725 18:54:05.760946   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.760956   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:54:05.760968   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:05.761027   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:05.801025   60176 cri.go:89] found id: ""
	I0725 18:54:05.801057   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.801068   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:05.801075   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:54:05.801142   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:54:05.834567   60176 cri.go:89] found id: ""
	I0725 18:54:05.834594   60176 logs.go:276] 0 containers: []
	W0725 18:54:05.834605   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:54:05.834615   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:05.834630   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:54:05.903812   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:54:05.903840   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:05.903855   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:05.981642   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:54:05.981671   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.024246   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.024316   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.081777   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:06.081802   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:04.786654   59645 pod_ready.go:102] pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:05.786668   59645 pod_ready.go:81] duration metric: took 4m0.006258788s for pod "metrics-server-569cc877fc-5js8s" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:05.786698   59645 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:05.786708   59645 pod_ready.go:38] duration metric: took 4m6.551775292s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:54:05.786726   59645 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:05.786754   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:05.786811   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:05.838362   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:05.838384   59645 cri.go:89] found id: ""
	I0725 18:54:05.838391   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:05.838441   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.843131   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:05.843190   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:05.882099   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:05.882125   59645 cri.go:89] found id: ""
	I0725 18:54:05.882134   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:05.882191   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.886383   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:05.886450   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:05.931971   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:05.932001   59645 cri.go:89] found id: ""
	I0725 18:54:05.932011   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:05.932069   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.936830   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:05.936891   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:05.976146   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:05.976171   59645 cri.go:89] found id: ""
	I0725 18:54:05.976179   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:05.976244   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:05.980878   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:05.980959   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:06.028640   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.028663   59645 cri.go:89] found id: ""
	I0725 18:54:06.028672   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:06.028720   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.033353   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:06.033411   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:06.072245   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.072269   59645 cri.go:89] found id: ""
	I0725 18:54:06.072279   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:06.072352   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.076614   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:06.076672   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:06.116418   59645 cri.go:89] found id: ""
	I0725 18:54:06.116443   59645 logs.go:276] 0 containers: []
	W0725 18:54:06.116453   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:06.116460   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:06.116520   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:06.154703   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:06.154725   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:06.154730   59645 cri.go:89] found id: ""
	I0725 18:54:06.154737   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:06.154795   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.158699   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:06.162190   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:06.162213   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:06.199003   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:06.199033   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:06.248171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:06.248208   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:06.774102   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:06.774139   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:06.815959   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:06.815984   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:06.872973   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:06.873013   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:06.915825   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:06.915858   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:06.958394   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:06.958423   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:06.993405   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:06.993437   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:07.026716   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:07.026745   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:07.040444   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:07.040474   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:07.156511   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:07.156541   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:07.191065   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:07.191091   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:08.099408   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:10.100363   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:07.148587   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:09.648368   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:08.598790   60176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:08.611234   60176 kubeadm.go:597] duration metric: took 4m4.357436643s to restartPrimaryControlPlane
	W0725 18:54:08.611305   60176 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0725 18:54:08.611343   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:54:13.076782   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.465409333s)
	I0725 18:54:13.076872   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:13.091089   60176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:54:13.102042   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:54:13.111117   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:54:13.111134   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:54:13.111171   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:54:13.119629   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:54:13.119676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:54:13.128676   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:54:13.136705   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:54:13.136761   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:54:13.145959   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.154628   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:54:13.154676   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:54:13.163164   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:54:13.171473   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:54:13.171552   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:54:13.179663   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:54:13.244923   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:54:13.245063   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:54:13.387687   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:54:13.387814   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:54:13.387941   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:54:13.566258   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:54:09.724251   59645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:09.740055   59645 api_server.go:72] duration metric: took 4m18.224261341s to wait for apiserver process to appear ...
	I0725 18:54:09.740086   59645 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:09.740125   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:09.740189   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:09.780027   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:09.780052   59645 cri.go:89] found id: ""
	I0725 18:54:09.780061   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:09.780121   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.784110   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:09.784170   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:09.821158   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:09.821177   59645 cri.go:89] found id: ""
	I0725 18:54:09.821185   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:09.821245   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.825235   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:09.825294   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:09.863880   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:09.863903   59645 cri.go:89] found id: ""
	I0725 18:54:09.863910   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:09.863956   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.868206   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:09.868260   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:09.902168   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:09.902191   59645 cri.go:89] found id: ""
	I0725 18:54:09.902200   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:09.902260   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.906583   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:09.906637   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:09.948980   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:09.948997   59645 cri.go:89] found id: ""
	I0725 18:54:09.949004   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:09.949061   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.953072   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:09.953135   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:09.987862   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:09.987891   59645 cri.go:89] found id: ""
	I0725 18:54:09.987901   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:09.987970   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:09.991893   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:09.991956   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:10.029171   59645 cri.go:89] found id: ""
	I0725 18:54:10.029201   59645 logs.go:276] 0 containers: []
	W0725 18:54:10.029212   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:10.029229   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:10.029298   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:10.069098   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.069123   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.069129   59645 cri.go:89] found id: ""
	I0725 18:54:10.069138   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:10.069185   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.073777   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:10.077625   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:10.077650   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:10.089863   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:10.089889   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:10.139865   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:10.139906   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:10.178236   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:10.178263   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:10.216425   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:10.216455   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:10.249818   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:10.249845   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:10.286603   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:10.286629   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:10.325189   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:10.325215   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:10.378752   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:10.378793   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:10.485922   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:10.485964   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:10.535583   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:10.535627   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:10.586930   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:10.586963   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:10.626295   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:10.626323   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.552874   59645 api_server.go:253] Checking apiserver healthz at https://192.168.50.221:8444/healthz ...
	I0725 18:54:13.558265   59645 api_server.go:279] https://192.168.50.221:8444/healthz returned 200:
	ok
	I0725 18:54:13.559439   59645 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:13.559459   59645 api_server.go:131] duration metric: took 3.819366874s to wait for apiserver health ...
	I0725 18:54:13.559467   59645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:13.559491   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:13.559539   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:13.597965   59645 cri.go:89] found id: "b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:13.597988   59645 cri.go:89] found id: ""
	I0725 18:54:13.597996   59645 logs.go:276] 1 containers: [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118]
	I0725 18:54:13.598050   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.602225   59645 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:13.602291   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:13.652885   59645 cri.go:89] found id: "45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:13.652914   59645 cri.go:89] found id: ""
	I0725 18:54:13.652924   59645 logs.go:276] 1 containers: [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1]
	I0725 18:54:13.652982   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.656970   59645 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:13.657031   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:13.690769   59645 cri.go:89] found id: "b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:13.690792   59645 cri.go:89] found id: ""
	I0725 18:54:13.690802   59645 logs.go:276] 1 containers: [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f]
	I0725 18:54:13.690861   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.694630   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:13.694692   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:13.732306   59645 cri.go:89] found id: "0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:13.732346   59645 cri.go:89] found id: ""
	I0725 18:54:13.732356   59645 logs.go:276] 1 containers: [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd]
	I0725 18:54:13.732413   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.736242   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:13.736311   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:13.771516   59645 cri.go:89] found id: "ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:13.771543   59645 cri.go:89] found id: ""
	I0725 18:54:13.771552   59645 logs.go:276] 1 containers: [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b]
	I0725 18:54:13.771610   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.775592   59645 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:13.775654   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:13.812821   59645 cri.go:89] found id: "de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:13.812847   59645 cri.go:89] found id: ""
	I0725 18:54:13.812857   59645 logs.go:276] 1 containers: [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1]
	I0725 18:54:13.812911   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.817039   59645 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:13.817097   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:13.856529   59645 cri.go:89] found id: ""
	I0725 18:54:13.856560   59645 logs.go:276] 0 containers: []
	W0725 18:54:13.856577   59645 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:13.856584   59645 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:13.856647   59645 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:13.889734   59645 cri.go:89] found id: "d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:13.889760   59645 cri.go:89] found id: "070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:13.889766   59645 cri.go:89] found id: ""
	I0725 18:54:13.889774   59645 logs.go:276] 2 containers: [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6]
	I0725 18:54:13.889831   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.893730   59645 ssh_runner.go:195] Run: which crictl
	I0725 18:54:13.897171   59645 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:13.897188   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:13.568262   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:54:13.568407   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:54:13.568493   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:54:13.568599   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:54:13.568677   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:54:13.568771   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:54:13.568844   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:54:13.569095   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:54:13.570081   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:54:13.570719   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:54:13.571213   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:54:13.571395   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:54:13.571482   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:54:13.900234   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:54:14.171283   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:54:14.317774   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:54:14.522412   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:54:14.537598   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:54:14.539553   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:54:14.539629   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:54:14.683755   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:54:12.600280   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.601203   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:11.648941   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.148132   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:14.685635   60176 out.go:204]   - Booting up control plane ...
	I0725 18:54:14.685764   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:54:14.697124   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:54:14.698087   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:54:14.698830   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:54:14.701051   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:54:14.314664   59645 logs.go:123] Gathering logs for container status ...
	I0725 18:54:14.314702   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:14.359956   59645 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:14.359991   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:14.429456   59645 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:14.429491   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:14.551238   59645 logs.go:123] Gathering logs for coredns [b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f] ...
	I0725 18:54:14.551279   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b64c5166c6547a79a7c3ebce909e4ce9360d227e3747605327f443d9212b156f"
	I0725 18:54:14.598045   59645 logs.go:123] Gathering logs for storage-provisioner [d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008] ...
	I0725 18:54:14.598082   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2387f4d44d2e3f169bcf0ca29a0d61f836258b687fe24282cbca1daad186008"
	I0725 18:54:14.633668   59645 logs.go:123] Gathering logs for storage-provisioner [070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6] ...
	I0725 18:54:14.633700   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 070dd1b58b01afb1aa11e61864b72767b54f54f1c7f650b26f88fc51bd8a4aa6"
	I0725 18:54:14.668871   59645 logs.go:123] Gathering logs for kube-controller-manager [de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1] ...
	I0725 18:54:14.668897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de5e9269d9497f6c89eef9c21e48fda2d7b21bae5459ba41db900c2cc9ebbeb1"
	I0725 18:54:14.732575   59645 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:14.732644   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:14.748852   59645 logs.go:123] Gathering logs for kube-apiserver [b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118] ...
	I0725 18:54:14.748897   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6b7ff25c3f043c712918d41c0af260aa0540b5eacd8cdac05162d758e5a7118"
	I0725 18:54:14.794021   59645 logs.go:123] Gathering logs for etcd [45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1] ...
	I0725 18:54:14.794058   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45aafe613d91fe604acd554f81e55ff6221de586fc907130d0df18b1104face1"
	I0725 18:54:14.836447   59645 logs.go:123] Gathering logs for kube-scheduler [0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd] ...
	I0725 18:54:14.836481   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c03165e87eac1b65fc83098816a75e43aa78533afec9880b3403da68d05b7dd"
	I0725 18:54:14.870813   59645 logs.go:123] Gathering logs for kube-proxy [ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b] ...
	I0725 18:54:14.870852   59645 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef20f38592f5cb8dcafd93e43839820ec958f5fd8aad3d161275248795198f2b"
	I0725 18:54:17.414647   59645 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:17.414678   59645 system_pods.go:61] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.414683   59645 system_pods.go:61] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.414687   59645 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.414691   59645 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.414694   59645 system_pods.go:61] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.414699   59645 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.414704   59645 system_pods.go:61] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.414710   59645 system_pods.go:61] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.414718   59645 system_pods.go:74] duration metric: took 3.85524656s to wait for pod list to return data ...
	I0725 18:54:17.414726   59645 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:17.417047   59645 default_sa.go:45] found service account: "default"
	I0725 18:54:17.417067   59645 default_sa.go:55] duration metric: took 2.333088ms for default service account to be created ...
	I0725 18:54:17.417074   59645 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:17.422890   59645 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:17.422915   59645 system_pods.go:89] "coredns-7db6d8ff4d-mfjzs" [452e9a58-6c09-4b38-8a0d-40f7b2b013d6] Running
	I0725 18:54:17.422920   59645 system_pods.go:89] "etcd-default-k8s-diff-port-600433" [dc54cea6-545b-47fd-97c0-db3432382b4f] Running
	I0725 18:54:17.422925   59645 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-600433" [0eb079d9-dd3b-4258-8d1c-fa33bb1a1f6a] Running
	I0725 18:54:17.422929   59645 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-600433" [8c994479-2a9d-4840-99e3-75f0f4f92d56] Running
	I0725 18:54:17.422933   59645 system_pods.go:89] "kube-proxy-smhmv" [a6cc9bb4-572b-4d0e-92a6-47fc71501ade] Running
	I0725 18:54:17.422936   59645 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-600433" [6cb2c1dc-e42a-42ab-baf5-35a68b0fb745] Running
	I0725 18:54:17.422942   59645 system_pods.go:89] "metrics-server-569cc877fc-5js8s" [1c72ac7a-9a56-4056-80bf-398eeab90b94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:17.422947   59645 system_pods.go:89] "storage-provisioner" [bca448b2-d88d-4978-891c-947f057a331d] Running
	I0725 18:54:17.422953   59645 system_pods.go:126] duration metric: took 5.874194ms to wait for k8s-apps to be running ...
	I0725 18:54:17.422958   59645 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:17.422998   59645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:17.438463   59645 system_svc.go:56] duration metric: took 15.497014ms WaitForService to wait for kubelet
	I0725 18:54:17.438490   59645 kubeadm.go:582] duration metric: took 4m25.922705533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:17.438511   59645 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:17.441632   59645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:17.441653   59645 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:17.441671   59645 node_conditions.go:105] duration metric: took 3.155244ms to run NodePressure ...
	I0725 18:54:17.441682   59645 start.go:241] waiting for startup goroutines ...
	I0725 18:54:17.441688   59645 start.go:246] waiting for cluster config update ...
	I0725 18:54:17.441698   59645 start.go:255] writing updated cluster config ...
	I0725 18:54:17.441957   59645 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:17.491791   59645 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:17.493992   59645 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-600433" cluster and "default" namespace by default
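	(Editor's sketch, not part of the captured log: the run above ends with minikube's standard post-start verification — listing kube-system pods, confirming the "default" service account, checking that the kubelet unit is active, and reading node capacity for the NodePressure check. Assuming the profile name shown in the output, the same checks can be replayed by hand with standard kubectl/minikube invocations:
	# List kube-system pods through the kubeconfig context this run created
	kubectl --context default-k8s-diff-port-600433 get pods -n kube-system
	# Confirm the kubelet unit is active inside the VM, as the logged systemctl call does
	minikube -p default-k8s-diff-port-600433 ssh -- sudo systemctl is-active kubelet
	# Inspect CRI-O containers, mirroring the crictl calls the log gathers from
	minikube -p default-k8s-diff-port-600433 ssh -- sudo crictl ps -a)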
	I0725 18:54:16.601481   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:19.100120   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:16.646970   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:18.647757   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:20.650382   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:21.599857   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:24.099007   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:23.147215   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:25.148069   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:26.599428   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:28.600159   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:30.601469   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:27.150076   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:29.647741   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:33.100850   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:35.600080   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:31.648293   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:34.147584   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:36.147883   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.099662   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.601691   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:38.148559   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:40.648470   60732 pod_ready.go:102] pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:43.099948   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:45.599146   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:41.647969   60732 pod_ready.go:81] duration metric: took 4m0.006188545s for pod "metrics-server-569cc877fc-4gcts" in "kube-system" namespace to be "Ready" ...
	E0725 18:54:41.647993   60732 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:54:41.647999   60732 pod_ready.go:38] duration metric: took 4m4.549463734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:54:41.648014   60732 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:54:41.648042   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:41.648093   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:41.701960   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:41.701990   60732 cri.go:89] found id: ""
	I0725 18:54:41.702000   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:41.702060   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.706683   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:41.706775   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:41.741997   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:41.742019   60732 cri.go:89] found id: ""
	I0725 18:54:41.742027   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:41.742070   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.745965   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:41.746019   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:41.787104   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:41.787127   60732 cri.go:89] found id: ""
	I0725 18:54:41.787137   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:41.787189   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.791375   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:41.791441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:41.836394   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:41.836417   60732 cri.go:89] found id: ""
	I0725 18:54:41.836425   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:41.836472   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.840775   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:41.840830   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:41.877307   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:41.877328   60732 cri.go:89] found id: ""
	I0725 18:54:41.877338   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:41.877384   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.881221   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:41.881289   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:41.918540   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:41.918569   60732 cri.go:89] found id: ""
	I0725 18:54:41.918579   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:41.918639   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:41.922866   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:41.922975   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:41.957335   60732 cri.go:89] found id: ""
	I0725 18:54:41.957361   60732 logs.go:276] 0 containers: []
	W0725 18:54:41.957371   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:41.957377   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:41.957441   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:41.998241   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:41.998269   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:41.998274   60732 cri.go:89] found id: ""
	I0725 18:54:41.998283   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:41.998333   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.002872   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:42.006541   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:42.006571   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:42.039456   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:42.039484   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:42.535367   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:42.535412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:42.592118   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:42.592165   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:42.606753   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:42.606784   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:42.656287   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:42.656337   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:42.696439   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:42.696470   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:42.752874   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:42.752913   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:42.786513   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:42.786540   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:42.914470   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:42.914506   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:42.951371   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:42.951399   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:42.989249   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:42.989278   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:43.030911   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:43.030945   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:45.581560   60732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:54:45.599532   60732 api_server.go:72] duration metric: took 4m15.71630146s to wait for apiserver process to appear ...
	I0725 18:54:45.599559   60732 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:54:45.599602   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:45.599669   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:45.643222   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:45.643245   60732 cri.go:89] found id: ""
	I0725 18:54:45.643251   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:45.643293   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.647594   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:45.647646   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:45.685817   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:45.685843   60732 cri.go:89] found id: ""
	I0725 18:54:45.685851   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:45.685908   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.689698   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:45.689746   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:45.723068   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:45.723086   60732 cri.go:89] found id: ""
	I0725 18:54:45.723093   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:45.723139   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.727312   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:45.727373   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:45.764668   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.764691   60732 cri.go:89] found id: ""
	I0725 18:54:45.764698   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:45.764746   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.768763   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:45.768821   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:45.804140   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.804162   60732 cri.go:89] found id: ""
	I0725 18:54:45.804171   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:45.804229   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.807907   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:45.807962   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:45.845435   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:45.845458   60732 cri.go:89] found id: ""
	I0725 18:54:45.845465   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:45.845516   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.849429   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:45.849488   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:45.882663   60732 cri.go:89] found id: ""
	I0725 18:54:45.882696   60732 logs.go:276] 0 containers: []
	W0725 18:54:45.882706   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:45.882713   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:45.882779   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:45.916947   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:45.916975   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:45.916988   60732 cri.go:89] found id: ""
	I0725 18:54:45.916995   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:45.917039   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.921470   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:45.925153   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:45.925175   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:45.959693   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:45.959722   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:45.998162   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:45.998188   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:47.599790   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:49.605818   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:46.424235   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:46.424271   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:46.465439   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:46.465468   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:46.516900   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:46.516931   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:46.629700   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:46.629777   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:46.673233   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:46.673264   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:46.706641   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:46.706680   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:46.741970   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:46.742002   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:46.755337   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:46.755364   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:46.805564   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:46.805594   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:46.856226   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:46.856257   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.398852   60732 api_server.go:253] Checking apiserver healthz at https://192.168.61.133:8443/healthz ...
	I0725 18:54:49.403222   60732 api_server.go:279] https://192.168.61.133:8443/healthz returned 200:
	ok
	I0725 18:54:49.404180   60732 api_server.go:141] control plane version: v1.30.3
	I0725 18:54:49.404199   60732 api_server.go:131] duration metric: took 3.804634202s to wait for apiserver health ...
	I0725 18:54:49.404206   60732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:54:49.404227   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:54:49.404269   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:54:49.439543   60732 cri.go:89] found id: "e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:49.439561   60732 cri.go:89] found id: ""
	I0725 18:54:49.439568   60732 logs.go:276] 1 containers: [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c]
	I0725 18:54:49.439625   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.444958   60732 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:54:49.445028   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:54:49.482934   60732 cri.go:89] found id: "c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:49.482959   60732 cri.go:89] found id: ""
	I0725 18:54:49.482969   60732 logs.go:276] 1 containers: [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4]
	I0725 18:54:49.483026   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.486982   60732 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:54:49.487057   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:54:49.526379   60732 cri.go:89] found id: "e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.526405   60732 cri.go:89] found id: ""
	I0725 18:54:49.526415   60732 logs.go:276] 1 containers: [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80]
	I0725 18:54:49.526481   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.531314   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:54:49.531401   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:54:49.565687   60732 cri.go:89] found id: "980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.565716   60732 cri.go:89] found id: ""
	I0725 18:54:49.565724   60732 logs.go:276] 1 containers: [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3]
	I0725 18:54:49.565772   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.569706   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:54:49.569778   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:54:49.606900   60732 cri.go:89] found id: "3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.606923   60732 cri.go:89] found id: ""
	I0725 18:54:49.606932   60732 logs.go:276] 1 containers: [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb]
	I0725 18:54:49.606986   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.611079   60732 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:54:49.611155   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:54:49.645077   60732 cri.go:89] found id: "a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.645099   60732 cri.go:89] found id: ""
	I0725 18:54:49.645107   60732 logs.go:276] 1 containers: [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef]
	I0725 18:54:49.645165   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.648932   60732 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:54:49.648984   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:54:49.685181   60732 cri.go:89] found id: ""
	I0725 18:54:49.685209   60732 logs.go:276] 0 containers: []
	W0725 18:54:49.685220   60732 logs.go:278] No container was found matching "kindnet"
	I0725 18:54:49.685228   60732 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:54:49.685290   60732 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:54:49.718825   60732 cri.go:89] found id: "fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.718852   60732 cri.go:89] found id: "e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:49.718858   60732 cri.go:89] found id: ""
	I0725 18:54:49.718866   60732 logs.go:276] 2 containers: [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354]
	I0725 18:54:49.718927   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.723182   60732 ssh_runner.go:195] Run: which crictl
	I0725 18:54:49.726590   60732 logs.go:123] Gathering logs for coredns [e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80] ...
	I0725 18:54:49.726611   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e265ce86dc50dde1060584c0e7befcda8619f550c47dba3658413b5f8db68a80"
	I0725 18:54:49.760011   60732 logs.go:123] Gathering logs for kube-controller-manager [a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef] ...
	I0725 18:54:49.760038   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a057db9df5d793b4153275409d87aaa3e7fbb8ccb2eecf0415fc42420aed2fef"
	I0725 18:54:49.816552   60732 logs.go:123] Gathering logs for kube-scheduler [980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3] ...
	I0725 18:54:49.816593   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 980f1cafbf9dfc9267889fd6a3ff4c0a23825c93e85ccd02b414f9364f2e4fa3"
	I0725 18:54:49.852003   60732 logs.go:123] Gathering logs for kube-proxy [3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb] ...
	I0725 18:54:49.852034   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3396bd8e6a955b1a3bb5ca2bd132a19ffea199dfed997c48a32523fb661980cb"
	I0725 18:54:49.887907   60732 logs.go:123] Gathering logs for storage-provisioner [fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5] ...
	I0725 18:54:49.887937   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd45387197a7106490e5754143528b7a4d43a2404ff21712cf8aa7a50cda9ef5"
	I0725 18:54:49.920728   60732 logs.go:123] Gathering logs for kubelet ...
	I0725 18:54:49.920763   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:54:49.972145   60732 logs.go:123] Gathering logs for dmesg ...
	I0725 18:54:49.972177   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:54:49.986365   60732 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:54:49.986391   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:54:50.088100   60732 logs.go:123] Gathering logs for kube-apiserver [e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c] ...
	I0725 18:54:50.088141   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e29758ae5e8576f73721306a8432d2c3af5b934e01d80a3b8f10db4519b1621c"
	I0725 18:54:50.137382   60732 logs.go:123] Gathering logs for etcd [c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4] ...
	I0725 18:54:50.137412   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e8d2e70adcf6c24e977455e8e60949941b111f5c563f078f08a06ae01c04f4"
	I0725 18:54:50.181636   60732 logs.go:123] Gathering logs for storage-provisioner [e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354] ...
	I0725 18:54:50.181668   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e75aba803f380783c42252e8d67dcc5e19753994a397e05845a00279e7db6354"
	I0725 18:54:50.217427   60732 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:54:50.217452   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:54:50.575378   60732 logs.go:123] Gathering logs for container status ...
	I0725 18:54:50.575421   60732 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:54:53.125288   60732 system_pods.go:59] 8 kube-system pods found
	I0725 18:54:53.125322   60732 system_pods.go:61] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.125327   60732 system_pods.go:61] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.125331   60732 system_pods.go:61] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.125335   60732 system_pods.go:61] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.125338   60732 system_pods.go:61] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.125341   60732 system_pods.go:61] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.125347   60732 system_pods.go:61] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.125352   60732 system_pods.go:61] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.125358   60732 system_pods.go:74] duration metric: took 3.721147072s to wait for pod list to return data ...
	I0725 18:54:53.125365   60732 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:54:53.127677   60732 default_sa.go:45] found service account: "default"
	I0725 18:54:53.127695   60732 default_sa.go:55] duration metric: took 2.325927ms for default service account to be created ...
	I0725 18:54:53.127702   60732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:54:53.134656   60732 system_pods.go:86] 8 kube-system pods found
	I0725 18:54:53.134682   60732 system_pods.go:89] "coredns-7db6d8ff4d-89vvx" [af4ee327-2b83-4102-aeac-9f2285355345] Running
	I0725 18:54:53.134690   60732 system_pods.go:89] "etcd-embed-certs-646344" [0600d338-30a1-4565-8e30-de2b6469320a] Running
	I0725 18:54:53.134697   60732 system_pods.go:89] "kube-apiserver-embed-certs-646344" [b52524e3-ab98-4771-b3b3-45de07c1c000] Running
	I0725 18:54:53.134707   60732 system_pods.go:89] "kube-controller-manager-embed-certs-646344" [c99acf9e-ee0e-42a5-b5ea-e80e4bd8aecf] Running
	I0725 18:54:53.134713   60732 system_pods.go:89] "kube-proxy-xk2lq" [2d74b42c-16cd-4714-803b-129e1d2ec722] Running
	I0725 18:54:53.134719   60732 system_pods.go:89] "kube-scheduler-embed-certs-646344" [fa5792cf-6666-47b2-8556-d563f948e722] Running
	I0725 18:54:53.134729   60732 system_pods.go:89] "metrics-server-569cc877fc-4gcts" [688239e2-95b8-4344-b3e5-5199f9504a19] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:54:53.134738   60732 system_pods.go:89] "storage-provisioner" [3d10d635-9457-42c3-9183-abc4a7205c48] Running
	I0725 18:54:53.134745   60732 system_pods.go:126] duration metric: took 7.037359ms to wait for k8s-apps to be running ...
	I0725 18:54:53.134756   60732 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:54:53.134804   60732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:54:53.152898   60732 system_svc.go:56] duration metric: took 18.132464ms WaitForService to wait for kubelet
	I0725 18:54:53.152939   60732 kubeadm.go:582] duration metric: took 4m23.26971097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:54:53.152966   60732 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:54:53.155626   60732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:54:53.155645   60732 node_conditions.go:123] node cpu capacity is 2
	I0725 18:54:53.155654   60732 node_conditions.go:105] duration metric: took 2.684085ms to run NodePressure ...
	I0725 18:54:53.155664   60732 start.go:241] waiting for startup goroutines ...
	I0725 18:54:53.155670   60732 start.go:246] waiting for cluster config update ...
	I0725 18:54:53.155680   60732 start.go:255] writing updated cluster config ...
	I0725 18:54:53.155922   60732 ssh_runner.go:195] Run: rm -f paused
	I0725 18:54:53.202323   60732 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:54:53.204492   60732 out.go:177] * Done! kubectl is now configured to use "embed-certs-646344" cluster and "default" namespace by default
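	(Editor's sketch, not part of the captured log: this run closes with the apiserver healthz probe and version check — the log reports https://192.168.61.133:8443/healthz returning 200 and a v1.30.3 control plane. Assuming the kubeconfig context created by this run, the two checks can be repeated by hand:
	# Hit the same /healthz endpoint through the API server, using kubeconfig credentials
	kubectl --context embed-certs-646344 get --raw /healthz
	# Report client and server versions; the log shows both at 1.30.3 with no minor skew
	kubectl --context embed-certs-646344 version)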
	I0725 18:54:52.099812   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.599046   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:54.702358   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:54:54.702929   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:54.703166   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:54:56.600641   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:58.600997   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:54:59.703734   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:54:59.704045   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:01.099681   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:03.099863   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:05.099936   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:07.600199   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:09.600587   59378 pod_ready.go:102] pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace has status "Ready":"False"
	I0725 18:55:10.600594   59378 pod_ready.go:81] duration metric: took 4m0.007321371s for pod "metrics-server-78fcd8795b-zthnk" in "kube-system" namespace to be "Ready" ...
	E0725 18:55:10.600617   59378 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 18:55:10.600625   59378 pod_ready.go:38] duration metric: took 4m5.545225617s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:55:10.600637   59378 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:55:10.600660   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:10.600701   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:10.652016   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:10.652040   59378 cri.go:89] found id: ""
	I0725 18:55:10.652047   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:10.652099   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.656405   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:10.656471   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:10.695672   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:10.695697   59378 cri.go:89] found id: ""
	I0725 18:55:10.695706   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:10.695768   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.700362   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:10.700424   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:10.736685   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.736702   59378 cri.go:89] found id: ""
	I0725 18:55:10.736709   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:10.736755   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.740626   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:10.740686   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:10.786452   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:10.786470   59378 cri.go:89] found id: ""
	I0725 18:55:10.786478   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:10.786533   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.790873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:10.790938   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:10.826203   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:10.826238   59378 cri.go:89] found id: ""
	I0725 18:55:10.826247   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:10.826311   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.830241   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:10.830418   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:10.865432   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:10.865460   59378 cri.go:89] found id: ""
	I0725 18:55:10.865470   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:10.865527   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.869415   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:10.869469   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:10.904230   59378 cri.go:89] found id: ""
	I0725 18:55:10.904254   59378 logs.go:276] 0 containers: []
	W0725 18:55:10.904262   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:10.904267   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:10.904339   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:10.938539   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:10.938558   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:10.938563   59378 cri.go:89] found id: ""
	I0725 18:55:10.938571   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:10.938623   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:09.704361   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:09.704593   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:55:10.942419   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:10.946266   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:10.946293   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:10.984335   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:10.984365   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:11.021733   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:11.021762   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:11.059218   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:11.059248   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:11.110886   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:11.110919   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:11.147381   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:11.147412   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:11.644012   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:11.644052   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:11.699290   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:11.699324   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:11.750317   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:11.750350   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:11.801340   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:11.801370   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:11.835746   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:11.835773   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:11.875309   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:11.875340   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:11.888262   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:11.888286   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:14.516169   59378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:55:14.533223   59378 api_server.go:72] duration metric: took 4m17.191676299s to wait for apiserver process to appear ...
	I0725 18:55:14.533248   59378 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:55:14.533283   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:14.533328   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:14.568170   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:14.568188   59378 cri.go:89] found id: ""
	I0725 18:55:14.568195   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:14.568237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.572638   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:14.572704   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:14.605953   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:14.605976   59378 cri.go:89] found id: ""
	I0725 18:55:14.605983   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:14.606029   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.609849   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:14.609912   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:14.650049   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.650068   59378 cri.go:89] found id: ""
	I0725 18:55:14.650075   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:14.650117   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.653905   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:14.653966   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:14.697059   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:14.697078   59378 cri.go:89] found id: ""
	I0725 18:55:14.697086   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:14.697145   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.701179   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:14.701245   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:14.741482   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:14.741499   59378 cri.go:89] found id: ""
	I0725 18:55:14.741507   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:14.741554   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.745355   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:14.745410   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:14.784058   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.784077   59378 cri.go:89] found id: ""
	I0725 18:55:14.784086   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:14.784146   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.788254   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:14.788354   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:14.823286   59378 cri.go:89] found id: ""
	I0725 18:55:14.823309   59378 logs.go:276] 0 containers: []
	W0725 18:55:14.823317   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:14.823322   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:14.823369   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:14.860591   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.860625   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:14.860631   59378 cri.go:89] found id: ""
	I0725 18:55:14.860639   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:14.860693   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.864444   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:14.868015   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:14.868034   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:14.902336   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:14.902361   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:14.951281   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:14.951312   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:14.987810   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:14.987836   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:15.031264   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:15.031303   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:15.082950   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:15.082981   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:15.097240   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:15.097264   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:15.195392   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:15.195422   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:15.238978   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:15.239015   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:15.278551   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:15.278586   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:15.318486   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:15.318517   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:15.354217   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:15.354245   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:15.391511   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:15.391536   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:18.296420   59378 api_server.go:253] Checking apiserver healthz at https://192.168.72.62:8443/healthz ...
	I0725 18:55:18.301704   59378 api_server.go:279] https://192.168.72.62:8443/healthz returned 200:
	ok
	I0725 18:55:18.303040   59378 api_server.go:141] control plane version: v1.31.0-beta.0
	I0725 18:55:18.303059   59378 api_server.go:131] duration metric: took 3.769804671s to wait for apiserver health ...
	I0725 18:55:18.303067   59378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:55:18.303097   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:55:18.303148   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:55:18.340192   59378 cri.go:89] found id: "86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:18.340210   59378 cri.go:89] found id: ""
	I0725 18:55:18.340217   59378 logs.go:276] 1 containers: [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c]
	I0725 18:55:18.340262   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.343882   59378 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:55:18.343936   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:55:18.381885   59378 cri.go:89] found id: "5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:18.381912   59378 cri.go:89] found id: ""
	I0725 18:55:18.381922   59378 logs.go:276] 1 containers: [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc]
	I0725 18:55:18.381979   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.385682   59378 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:55:18.385749   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:55:18.420162   59378 cri.go:89] found id: "143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:18.420183   59378 cri.go:89] found id: ""
	I0725 18:55:18.420190   59378 logs.go:276] 1 containers: [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956]
	I0725 18:55:18.420237   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.424103   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:55:18.424153   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:55:18.462946   59378 cri.go:89] found id: "e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:18.462987   59378 cri.go:89] found id: ""
	I0725 18:55:18.462998   59378 logs.go:276] 1 containers: [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3]
	I0725 18:55:18.463055   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.467228   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:55:18.467278   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:55:18.510007   59378 cri.go:89] found id: "6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:18.510036   59378 cri.go:89] found id: ""
	I0725 18:55:18.510046   59378 logs.go:276] 1 containers: [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9]
	I0725 18:55:18.510103   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.513873   59378 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:55:18.513937   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:55:18.551230   59378 cri.go:89] found id: "f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:18.551255   59378 cri.go:89] found id: ""
	I0725 18:55:18.551264   59378 logs.go:276] 1 containers: [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89]
	I0725 18:55:18.551322   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.555764   59378 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:55:18.555833   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:55:18.593584   59378 cri.go:89] found id: ""
	I0725 18:55:18.593615   59378 logs.go:276] 0 containers: []
	W0725 18:55:18.593626   59378 logs.go:278] No container was found matching "kindnet"
	I0725 18:55:18.593633   59378 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0725 18:55:18.593690   59378 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 18:55:18.631912   59378 cri.go:89] found id: "dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.631938   59378 cri.go:89] found id: "e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.631944   59378 cri.go:89] found id: ""
	I0725 18:55:18.631952   59378 logs.go:276] 2 containers: [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c]
	I0725 18:55:18.632036   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.635895   59378 ssh_runner.go:195] Run: which crictl
	I0725 18:55:18.639457   59378 logs.go:123] Gathering logs for storage-provisioner [dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b] ...
	I0725 18:55:18.639481   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcdeb74e654678713f19e2c7a54a63d910d5dba9f4f480ae67c7cc73dd8e7a0b"
	I0725 18:55:18.677563   59378 logs.go:123] Gathering logs for storage-provisioner [e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c] ...
	I0725 18:55:18.677595   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99e6f0bcc37cdd83c5797ed940b235e9b65d093643fed3477d0e7478c4b048c"
	I0725 18:55:18.716298   59378 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:55:18.716353   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:55:19.104236   59378 logs.go:123] Gathering logs for container status ...
	I0725 18:55:19.104281   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:55:19.157931   59378 logs.go:123] Gathering logs for kubelet ...
	I0725 18:55:19.157965   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:55:19.214479   59378 logs.go:123] Gathering logs for kube-apiserver [86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c] ...
	I0725 18:55:19.214510   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a55c3ce8aca3d3e010ea9ff6815f9b7ad2b3c6826d1d4b70dda1fc72bfb93c"
	I0725 18:55:19.265860   59378 logs.go:123] Gathering logs for kube-proxy [6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9] ...
	I0725 18:55:19.265887   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9d65c9517298d52221d756e069bcd5a5ae500b94e2845d12617880b93a7ab9"
	I0725 18:55:19.306476   59378 logs.go:123] Gathering logs for coredns [143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956] ...
	I0725 18:55:19.306501   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 143f91ca28541dec6e89f885f488aed85fc75cfa9962a3fdd58c729178403956"
	I0725 18:55:19.340758   59378 logs.go:123] Gathering logs for kube-scheduler [e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3] ...
	I0725 18:55:19.340783   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8502ebc3bc8f837f82c872037efdae18f40673bb7363a16bf023b13de4301f3"
	I0725 18:55:19.380798   59378 logs.go:123] Gathering logs for kube-controller-manager [f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89] ...
	I0725 18:55:19.380824   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f55693d23f9762803aadb9e0f1e32cf26d78a998b73174780620316015c84b89"
	I0725 18:55:19.439585   59378 logs.go:123] Gathering logs for dmesg ...
	I0725 18:55:19.439619   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 18:55:19.454117   59378 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:55:19.454145   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 18:55:19.558944   59378 logs.go:123] Gathering logs for etcd [5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc] ...
	I0725 18:55:19.558972   59378 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b4489bee34a426fce650cca563f6285dbec03aa48fa9644e2e7b99e611af9fc"
	I0725 18:55:22.114733   59378 system_pods.go:59] 8 kube-system pods found
	I0725 18:55:22.114766   59378 system_pods.go:61] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.114773   59378 system_pods.go:61] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.114778   59378 system_pods.go:61] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.114783   59378 system_pods.go:61] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.114788   59378 system_pods.go:61] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.114792   59378 system_pods.go:61] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.114800   59378 system_pods.go:61] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.114806   59378 system_pods.go:61] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.114815   59378 system_pods.go:74] duration metric: took 3.811742621s to wait for pod list to return data ...
	I0725 18:55:22.114827   59378 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:55:22.118211   59378 default_sa.go:45] found service account: "default"
	I0725 18:55:22.118237   59378 default_sa.go:55] duration metric: took 3.400507ms for default service account to be created ...
	I0725 18:55:22.118245   59378 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:55:22.123350   59378 system_pods.go:86] 8 kube-system pods found
	I0725 18:55:22.123375   59378 system_pods.go:89] "coredns-5cfdc65f69-lq97z" [035503b5-7acf-4f42-a057-b3346c9b9704] Running
	I0725 18:55:22.123380   59378 system_pods.go:89] "etcd-no-preload-371663" [b5cbee2d-fd75-4408-a80c-ee5b085565ed] Running
	I0725 18:55:22.123384   59378 system_pods.go:89] "kube-apiserver-no-preload-371663" [53700016-48b5-4309-a0d1-61d5357ab1a3] Running
	I0725 18:55:22.123390   59378 system_pods.go:89] "kube-controller-manager-no-preload-371663" [cd24da2c-fbb9-4ff4-984b-b4793885305f] Running
	I0725 18:55:22.123394   59378 system_pods.go:89] "kube-proxy-bf9rt" [65cbe378-8c6b-4034-9882-fc55c4eeca38] Running
	I0725 18:55:22.123398   59378 system_pods.go:89] "kube-scheduler-no-preload-371663" [d5a69ef8-919b-4ca0-9f1b-523a7d3c6c13] Running
	I0725 18:55:22.123405   59378 system_pods.go:89] "metrics-server-78fcd8795b-zthnk" [1cd7a284-6dd0-4052-966f-617028833a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:55:22.123410   59378 system_pods.go:89] "storage-provisioner" [fcd1c25d-32bd-4190-9e85-9629d6ea8bd0] Running
	I0725 18:55:22.123417   59378 system_pods.go:126] duration metric: took 5.166628ms to wait for k8s-apps to be running ...
	I0725 18:55:22.123424   59378 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:55:22.123467   59378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:55:22.139784   59378 system_svc.go:56] duration metric: took 16.349883ms WaitForService to wait for kubelet
	I0725 18:55:22.139808   59378 kubeadm.go:582] duration metric: took 4m24.798265923s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:55:22.139825   59378 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:55:22.143958   59378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0725 18:55:22.143981   59378 node_conditions.go:123] node cpu capacity is 2
	I0725 18:55:22.143992   59378 node_conditions.go:105] duration metric: took 4.161089ms to run NodePressure ...
	I0725 18:55:22.144006   59378 start.go:241] waiting for startup goroutines ...
	I0725 18:55:22.144015   59378 start.go:246] waiting for cluster config update ...
	I0725 18:55:22.144026   59378 start.go:255] writing updated cluster config ...
	I0725 18:55:22.144382   59378 ssh_runner.go:195] Run: rm -f paused
	I0725 18:55:22.192893   59378 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0725 18:55:22.195796   59378 out.go:177] * Done! kubectl is now configured to use "no-preload-371663" cluster and "default" namespace by default
	I0725 18:55:29.705545   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:55:29.705871   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.707936   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:56:09.708279   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:56:09.708303   60176 kubeadm.go:310] 
	I0725 18:56:09.708361   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:56:09.708425   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:56:09.708434   60176 kubeadm.go:310] 
	I0725 18:56:09.708495   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:56:09.708548   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:56:09.708721   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:56:09.708755   60176 kubeadm.go:310] 
	I0725 18:56:09.708910   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:56:09.708960   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:56:09.708997   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:56:09.709006   60176 kubeadm.go:310] 
	I0725 18:56:09.709130   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:56:09.709230   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:56:09.709239   60176 kubeadm.go:310] 
	I0725 18:56:09.709366   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:56:09.709499   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:56:09.709608   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:56:09.709715   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:56:09.709730   60176 kubeadm.go:310] 
	I0725 18:56:09.710446   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:56:09.710594   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:56:09.710699   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0725 18:56:09.710838   60176 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0725 18:56:09.710897   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0725 18:56:15.078699   60176 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.367772874s)
	I0725 18:56:15.078772   60176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:56:15.093265   60176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:56:15.102513   60176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:56:15.102529   60176 kubeadm.go:157] found existing configuration files:
	
	I0725 18:56:15.102570   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:56:15.111001   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:56:15.111059   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:56:15.119773   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:56:15.128109   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:56:15.128166   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:56:15.136753   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.145122   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:56:15.145179   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:56:15.153952   60176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:56:15.162067   60176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:56:15.162109   60176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:56:15.170779   60176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 18:56:15.382925   60176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 18:58:11.387751   60176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0725 18:58:11.387868   60176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0725 18:58:11.389848   60176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0725 18:58:11.389935   60176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:58:11.390076   60176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:58:11.390177   60176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:58:11.390289   60176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:58:11.390389   60176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:58:11.392281   60176 out.go:204]   - Generating certificates and keys ...
	I0725 18:58:11.392400   60176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:58:11.392487   60176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:58:11.392609   60176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 18:58:11.392698   60176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 18:58:11.392808   60176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 18:58:11.392893   60176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 18:58:11.392960   60176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 18:58:11.393054   60176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 18:58:11.393160   60176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 18:58:11.393260   60176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 18:58:11.393311   60176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 18:58:11.393362   60176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:58:11.393415   60176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:58:11.393470   60176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:58:11.393522   60176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:58:11.393573   60176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:58:11.393665   60176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:58:11.393760   60176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:58:11.393815   60176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:58:11.393888   60176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:58:11.395197   60176 out.go:204]   - Booting up control plane ...
	I0725 18:58:11.395292   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:58:11.395385   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:58:11.395454   60176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:58:11.395528   60176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:58:11.395674   60176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 18:58:11.395717   60176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0725 18:58:11.395793   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396019   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396116   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396334   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396408   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396572   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396638   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.396799   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.396865   60176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0725 18:58:11.397061   60176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0725 18:58:11.397069   60176 kubeadm.go:310] 
	I0725 18:58:11.397102   60176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0725 18:58:11.397136   60176 kubeadm.go:310] 		timed out waiting for the condition
	I0725 18:58:11.397141   60176 kubeadm.go:310] 
	I0725 18:58:11.397169   60176 kubeadm.go:310] 	This error is likely caused by:
	I0725 18:58:11.397212   60176 kubeadm.go:310] 		- The kubelet is not running
	I0725 18:58:11.397314   60176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0725 18:58:11.397338   60176 kubeadm.go:310] 
	I0725 18:58:11.397462   60176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0725 18:58:11.397504   60176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0725 18:58:11.397554   60176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0725 18:58:11.397566   60176 kubeadm.go:310] 
	I0725 18:58:11.397657   60176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0725 18:58:11.397730   60176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0725 18:58:11.397737   60176 kubeadm.go:310] 
	I0725 18:58:11.397832   60176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0725 18:58:11.397928   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0725 18:58:11.398009   60176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0725 18:58:11.398088   60176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0725 18:58:11.398144   60176 kubeadm.go:310] 
	I0725 18:58:11.398184   60176 kubeadm.go:394] duration metric: took 8m7.195831536s to StartCluster
	I0725 18:58:11.398237   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0725 18:58:11.398431   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 18:58:11.438474   60176 cri.go:89] found id: ""
	I0725 18:58:11.438497   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.438504   60176 logs.go:278] No container was found matching "kube-apiserver"
	I0725 18:58:11.438509   60176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0725 18:58:11.438560   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 18:58:11.470965   60176 cri.go:89] found id: ""
	I0725 18:58:11.471000   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.471013   60176 logs.go:278] No container was found matching "etcd"
	I0725 18:58:11.471021   60176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0725 18:58:11.471086   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 18:58:11.503353   60176 cri.go:89] found id: ""
	I0725 18:58:11.503387   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.503402   60176 logs.go:278] No container was found matching "coredns"
	I0725 18:58:11.503409   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0725 18:58:11.503468   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 18:58:11.535307   60176 cri.go:89] found id: ""
	I0725 18:58:11.535340   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.535350   60176 logs.go:278] No container was found matching "kube-scheduler"
	I0725 18:58:11.535359   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0725 18:58:11.535425   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 18:58:11.568071   60176 cri.go:89] found id: ""
	I0725 18:58:11.568094   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.568104   60176 logs.go:278] No container was found matching "kube-proxy"
	I0725 18:58:11.568118   60176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 18:58:11.568183   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 18:58:11.600126   60176 cri.go:89] found id: ""
	I0725 18:58:11.600154   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.600165   60176 logs.go:278] No container was found matching "kube-controller-manager"
	I0725 18:58:11.600172   60176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0725 18:58:11.600234   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 18:58:11.632609   60176 cri.go:89] found id: ""
	I0725 18:58:11.632635   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.632642   60176 logs.go:278] No container was found matching "kindnet"
	I0725 18:58:11.632648   60176 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 18:58:11.632706   60176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 18:58:11.666352   60176 cri.go:89] found id: ""
	I0725 18:58:11.666376   60176 logs.go:276] 0 containers: []
	W0725 18:58:11.666384   60176 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0725 18:58:11.666392   60176 logs.go:123] Gathering logs for describe nodes ...
	I0725 18:58:11.666409   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0725 18:58:11.766887   60176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0725 18:58:11.766912   60176 logs.go:123] Gathering logs for CRI-O ...
	I0725 18:58:11.766930   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0725 18:58:11.885565   60176 logs.go:123] Gathering logs for container status ...
	I0725 18:58:11.885601   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 18:58:11.927611   60176 logs.go:123] Gathering logs for kubelet ...
	I0725 18:58:11.927637   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 18:58:11.978011   60176 logs.go:123] Gathering logs for dmesg ...
	I0725 18:58:11.978046   60176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0725 18:58:11.991296   60176 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0725 18:58:11.991350   60176 out.go:239] * 
	W0725 18:58:11.991412   60176 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.991433   60176 out.go:239] * 
	W0725 18:58:11.992535   60176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 18:58:11.996223   60176 out.go:177] 
	W0725 18:58:11.997418   60176 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0725 18:58:11.997464   60176 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0725 18:58:11.997495   60176 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0725 18:58:11.998869   60176 out.go:177] 
	
	
	==> CRI-O <==
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.737039706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934549737003867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1762d7d-8d44-4e0f-a2f9-e76dd83d7b88 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.737646073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f91d3f9b-3d4e-4798-a623-84ad808fbda3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.737694147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f91d3f9b-3d4e-4798-a623-84ad808fbda3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.737750864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f91d3f9b-3d4e-4798-a623-84ad808fbda3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.768994609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81aa0323-8e7b-46a2-956a-520dbd42b8dd name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.769152658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81aa0323-8e7b-46a2-956a-520dbd42b8dd name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.770412407Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44475691-7c9f-4ea1-9c28-660a169813a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.770797350Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934549770776860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44475691-7c9f-4ea1-9c28-660a169813a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.771347466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bd93aaa-0a1a-49d1-9823-0c491c223928 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.771398080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bd93aaa-0a1a-49d1-9823-0c491c223928 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.771443277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6bd93aaa-0a1a-49d1-9823-0c491c223928 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.804166953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=816f8dc7-6179-4f40-8be9-c4fff0a7f797 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.804260264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=816f8dc7-6179-4f40-8be9-c4fff0a7f797 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.805369640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50525b13-895c-4d40-b45b-0603c683b16e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.805922661Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934549805890212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50525b13-895c-4d40-b45b-0603c683b16e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.806589271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=304e196a-59e3-4f85-812b-89c168cf00f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.806681976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=304e196a-59e3-4f85-812b-89c168cf00f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.806731237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=304e196a-59e3-4f85-812b-89c168cf00f1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.838062528Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef6d520d-698c-432d-ab2c-e4662e438b23 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.838202462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef6d520d-698c-432d-ab2c-e4662e438b23 name=/runtime.v1.RuntimeService/Version
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.839581845Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7ee7e13-f06b-4271-934f-42feddbfd4d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.839986799Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721934549839952405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7ee7e13-f06b-4271-934f-42feddbfd4d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.840705697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3901d012-cc94-47df-bc89-61cadcc83919 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.840767379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3901d012-cc94-47df-bc89-61cadcc83919 name=/runtime.v1.RuntimeService/ListContainers
	Jul 25 19:09:09 old-k8s-version-108542 crio[648]: time="2024-07-25 19:09:09.840808060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3901d012-cc94-47df-bc89-61cadcc83919 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul25 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055343] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037717] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.863537] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.917310] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.440772] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.925882] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.062083] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062742] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.199961] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.129009] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.312354] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[Jul25 18:50] systemd-fstab-generator[838]: Ignoring "noauto" option for root device
	[  +0.061607] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.085718] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[ +12.193987] kauditd_printk_skb: 46 callbacks suppressed
	[Jul25 18:54] systemd-fstab-generator[5076]: Ignoring "noauto" option for root device
	[Jul25 18:56] systemd-fstab-generator[5371]: Ignoring "noauto" option for root device
	[  +0.066840] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:09:10 up 19 min,  0 users,  load average: 0.20, 0.12, 0.09
	Linux old-k8s-version-108542 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc000dacea0, 0x48ab5d6, 0x3, 0xc000d633b0, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000dacea0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000d633b0, 0x24, 0x0, ...)
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]: net.(*Dialer).DialContext(0xc000291c80, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000d633b0, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0000dfd20, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000d633b0, 0x24, 0x60, 0x7f3cbdafcb78, 0x118, ...)
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 25 19:09:06 old-k8s-version-108542 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]: net/http.(*Transport).dial(0xc000a5a000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000d633b0, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]: net/http.(*Transport).dialConn(0xc000a5a000, 0x4f7fe00, 0xc000120018, 0x0, 0xc0009b4480, 0x5, 0xc000d633b0, 0x24, 0x0, 0xc000db27e0, ...)
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]: net/http.(*Transport).dialConnFor(0xc000a5a000, 0xc00073d130)
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]: created by net/http.(*Transport).queueForDial
	Jul 25 19:09:06 old-k8s-version-108542 kubelet[6815]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 25 19:09:07 old-k8s-version-108542 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 134.
	Jul 25 19:09:07 old-k8s-version-108542 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 25 19:09:07 old-k8s-version-108542 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 25 19:09:07 old-k8s-version-108542 kubelet[6824]: I0725 19:09:07.263614    6824 server.go:416] Version: v1.20.0
	Jul 25 19:09:07 old-k8s-version-108542 kubelet[6824]: I0725 19:09:07.263878    6824 server.go:837] Client rotation is on, will bootstrap in background
	Jul 25 19:09:07 old-k8s-version-108542 kubelet[6824]: I0725 19:09:07.265937    6824 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 25 19:09:07 old-k8s-version-108542 kubelet[6824]: W0725 19:09:07.267002    6824 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 25 19:09:07 old-k8s-version-108542 kubelet[6824]: I0725 19:09:07.267183    6824 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108542 -n old-k8s-version-108542
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 2 (217.799085ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-108542" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (112.54s)
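The failing log above ends with kubeadm's own troubleshooting hints (checking the kubelet via systemd, listing control-plane containers with crictl) plus minikube's suggestion to pin the kubelet cgroup driver. A minimal sketch of that sequence, assuming the cri-o socket path and profile name shown in the log:

	# on the node: is the kubelet running, and why did it last exit?
	systemctl status kubelet
	journalctl -xeu kubelet

	# list any Kubernetes containers cri-o started (excluding pause)
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# if a cgroup-driver mismatch is the cause, retry with the driver pinned to systemd
	minikube start -p old-k8s-version-108542 --extra-config=kubelet.cgroup-driver=systemd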

                                                
                                    

Test pass (252/322)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 23.76
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 12.13
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.05
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 11.73
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.55
31 TestOffline 95.82
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 142.82
40 TestAddons/serial/GCPAuth/Namespaces 0.13
42 TestAddons/parallel/Registry 16.94
44 TestAddons/parallel/InspektorGadget 10.95
46 TestAddons/parallel/HelmTiller 10.79
48 TestAddons/parallel/CSI 68.87
49 TestAddons/parallel/Headlamp 19.56
50 TestAddons/parallel/CloudSpanner 5.69
51 TestAddons/parallel/LocalPath 55.04
52 TestAddons/parallel/NvidiaDevicePlugin 5.49
53 TestAddons/parallel/Yakd 10.87
55 TestCertOptions 59.51
56 TestCertExpiration 259.22
58 TestForceSystemdFlag 71.39
59 TestForceSystemdEnv 41.54
61 TestKVMDriverInstallOrUpdate 4.54
65 TestErrorSpam/setup 38.38
66 TestErrorSpam/start 0.32
67 TestErrorSpam/status 0.7
68 TestErrorSpam/pause 1.46
69 TestErrorSpam/unpause 1.53
70 TestErrorSpam/stop 4.56
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 55.39
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 36.41
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.6
82 TestFunctional/serial/CacheCmd/cache/add_local 2.06
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
87 TestFunctional/serial/CacheCmd/cache/delete 0.08
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
90 TestFunctional/serial/ExtraConfig 31.61
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.34
93 TestFunctional/serial/LogsFileCmd 1.36
94 TestFunctional/serial/InvalidService 4.02
96 TestFunctional/parallel/ConfigCmd 0.3
97 TestFunctional/parallel/DashboardCmd 11.07
98 TestFunctional/parallel/DryRun 0.35
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 0.81
104 TestFunctional/parallel/ServiceCmdConnect 8.67
105 TestFunctional/parallel/AddonsCmd 0.11
106 TestFunctional/parallel/PersistentVolumeClaim 46.46
108 TestFunctional/parallel/SSHCmd 0.44
109 TestFunctional/parallel/CpCmd 1.29
110 TestFunctional/parallel/MySQL 23.77
111 TestFunctional/parallel/FileSync 0.22
112 TestFunctional/parallel/CertSync 1.41
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
120 TestFunctional/parallel/License 0.56
121 TestFunctional/parallel/ServiceCmd/DeployApp 11.19
122 TestFunctional/parallel/Version/short 0.05
123 TestFunctional/parallel/Version/components 0.63
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
128 TestFunctional/parallel/ImageCommands/ImageBuild 3.89
129 TestFunctional/parallel/ImageCommands/Setup 1.75
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.29
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.29
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.66
146 TestFunctional/parallel/ImageCommands/ImageRemove 1.26
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 8.25
148 TestFunctional/parallel/ServiceCmd/List 0.32
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
151 TestFunctional/parallel/ServiceCmd/Format 0.29
152 TestFunctional/parallel/ServiceCmd/URL 0.35
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
154 TestFunctional/parallel/ProfileCmd/profile_list 0.29
155 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
156 TestFunctional/parallel/MountCmd/any-port 17.05
157 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
158 TestFunctional/parallel/MountCmd/specific-port 1.67
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.38
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 202.11
167 TestMultiControlPlane/serial/DeployApp 6.33
168 TestMultiControlPlane/serial/PingHostFromPods 1.2
169 TestMultiControlPlane/serial/AddWorkerNode 56.98
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
172 TestMultiControlPlane/serial/CopyFile 12.43
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.46
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.01
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
181 TestMultiControlPlane/serial/RestartCluster 341.84
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 78.48
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
188 TestJSONOutput/start/Command 55.79
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.7
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.63
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.37
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 84.02
220 TestMountStart/serial/StartWithMountFirst 24.62
221 TestMountStart/serial/VerifyMountFirst 0.36
222 TestMountStart/serial/StartWithMountSecond 27.71
223 TestMountStart/serial/VerifyMountSecond 0.36
224 TestMountStart/serial/DeleteFirst 0.86
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 1.27
227 TestMountStart/serial/RestartStopped 22.44
228 TestMountStart/serial/VerifyMountPostStop 0.37
231 TestMultiNode/serial/FreshStart2Nodes 119.28
232 TestMultiNode/serial/DeployApp2Nodes 5.82
233 TestMultiNode/serial/PingHostFrom2Pods 0.74
234 TestMultiNode/serial/AddNode 47.68
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 6.98
238 TestMultiNode/serial/StopNode 2.19
239 TestMultiNode/serial/StartAfterStop 38.92
241 TestMultiNode/serial/DeleteNode 2.37
243 TestMultiNode/serial/RestartMultiNode 182.81
244 TestMultiNode/serial/ValidateNameConflict 40.69
251 TestScheduledStopUnix 110.37
255 TestRunningBinaryUpgrade 215.79
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 85.85
262 TestNoKubernetes/serial/StartWithStopK8s 10.5
271 TestPause/serial/Start 99.1
272 TestNoKubernetes/serial/Start 51.48
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
274 TestNoKubernetes/serial/ProfileList 1.53
275 TestNoKubernetes/serial/Stop 1.27
276 TestNoKubernetes/serial/StartNoArgs 22.32
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
278 TestStoppedBinaryUpgrade/Setup 2.27
279 TestStoppedBinaryUpgrade/Upgrade 105.7
288 TestNetworkPlugins/group/false 3.21
292 TestStoppedBinaryUpgrade/MinikubeLogs 0.84
296 TestStartStop/group/no-preload/serial/FirstStart 122
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 58.44
299 TestStartStop/group/no-preload/serial/DeployApp 11.31
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.26
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
306 TestStartStop/group/newest-cni/serial/FirstStart 48.45
307 TestStartStop/group/newest-cni/serial/DeployApp 0
308 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
309 TestStartStop/group/newest-cni/serial/Stop 10.44
310 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
311 TestStartStop/group/newest-cni/serial/SecondStart 38.52
314 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
317 TestStartStop/group/newest-cni/serial/Pause 3.98
320 TestStartStop/group/embed-certs/serial/FirstStart 59.67
321 TestStartStop/group/no-preload/serial/SecondStart 661.56
323 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 568.73
324 TestStartStop/group/embed-certs/serial/DeployApp 9.25
325 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
327 TestStartStop/group/old-k8s-version/serial/Stop 4.28
328 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
331 TestStartStop/group/embed-certs/serial/SecondStart 422.17
340 TestNetworkPlugins/group/auto/Start 95.99
341 TestNetworkPlugins/group/kindnet/Start 75.35
342 TestNetworkPlugins/group/auto/KubeletFlags 0.2
343 TestNetworkPlugins/group/auto/NetCatPod 11.23
344 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
345 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
346 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
347 TestNetworkPlugins/group/auto/DNS 0.16
348 TestNetworkPlugins/group/auto/Localhost 0.13
349 TestNetworkPlugins/group/auto/HairPin 0.12
350 TestNetworkPlugins/group/kindnet/DNS 0.18
351 TestNetworkPlugins/group/calico/Start 88.33
352 TestNetworkPlugins/group/kindnet/Localhost 0.19
353 TestNetworkPlugins/group/kindnet/HairPin 0.14
354 TestNetworkPlugins/group/custom-flannel/Start 103.27
355 TestNetworkPlugins/group/enable-default-cni/Start 92.11
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/KubeletFlags 0.21
358 TestNetworkPlugins/group/calico/NetCatPod 12.25
359 TestNetworkPlugins/group/calico/DNS 0.19
360 TestNetworkPlugins/group/calico/Localhost 0.14
361 TestNetworkPlugins/group/calico/HairPin 0.14
362 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
363 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
366 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
367 TestStartStop/group/embed-certs/serial/Pause 3.21
368 TestNetworkPlugins/group/flannel/Start 84.46
369 TestNetworkPlugins/group/enable-default-cni/DNS 32.74
370 TestNetworkPlugins/group/custom-flannel/DNS 0.2
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
373 TestNetworkPlugins/group/bridge/Start 76.43
374 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
375 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
378 TestNetworkPlugins/group/bridge/NetCatPod 10.2
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
380 TestNetworkPlugins/group/flannel/NetCatPod 10.21
381 TestNetworkPlugins/group/bridge/DNS 0.17
382 TestNetworkPlugins/group/bridge/Localhost 0.11
383 TestNetworkPlugins/group/bridge/HairPin 0.12
384 TestNetworkPlugins/group/flannel/DNS 0.15
385 TestNetworkPlugins/group/flannel/Localhost 0.11
386 TestNetworkPlugins/group/flannel/HairPin 0.11
x
+
TestDownloadOnly/v1.20.0/json-events (23.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-048310 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-048310 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.757140474s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.76s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-048310
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-048310: exit status 85 (54.899432ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-048310 | jenkins | v1.33.1 | 25 Jul 24 17:28 UTC |          |
	|         | -p download-only-048310        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 17:28:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:28:45.993456   13070 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:28:45.993563   13070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:28:45.993571   13070 out.go:304] Setting ErrFile to fd 2...
	I0725 17:28:45.993575   13070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:28:45.993731   13070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	W0725 17:28:45.993840   13070 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19326-5877/.minikube/config/config.json: open /home/jenkins/minikube-integration/19326-5877/.minikube/config/config.json: no such file or directory
	I0725 17:28:45.994398   13070 out.go:298] Setting JSON to true
	I0725 17:28:45.995244   13070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":670,"bootTime":1721927856,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 17:28:45.995304   13070 start.go:139] virtualization: kvm guest
	I0725 17:28:45.997836   13070 out.go:97] [download-only-048310] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0725 17:28:45.997966   13070 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball: no such file or directory
	I0725 17:28:45.998005   13070 notify.go:220] Checking for updates...
	I0725 17:28:45.999493   13070 out.go:169] MINIKUBE_LOCATION=19326
	I0725 17:28:46.000983   13070 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:28:46.002366   13070 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:28:46.003795   13070 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:28:46.005041   13070 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0725 17:28:46.007296   13070 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 17:28:46.007566   13070 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 17:28:46.103443   13070 out.go:97] Using the kvm2 driver based on user configuration
	I0725 17:28:46.103477   13070 start.go:297] selected driver: kvm2
	I0725 17:28:46.103486   13070 start.go:901] validating driver "kvm2" against <nil>
	I0725 17:28:46.103858   13070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:28:46.103984   13070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 17:28:46.118629   13070 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 17:28:46.118692   13070 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 17:28:46.119175   13070 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0725 17:28:46.119340   13070 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 17:28:46.119415   13070 cni.go:84] Creating CNI manager for ""
	I0725 17:28:46.119431   13070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 17:28:46.119441   13070 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 17:28:46.119505   13070 start.go:340] cluster config:
	{Name:download-only-048310 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-048310 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:28:46.119722   13070 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:28:46.121709   13070 out.go:97] Downloading VM boot image ...
	I0725 17:28:46.121752   13070 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19326-5877/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0725 17:28:55.482969   13070 out.go:97] Starting "download-only-048310" primary control-plane node in "download-only-048310" cluster
	I0725 17:28:55.483002   13070 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 17:28:55.580462   13070 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0725 17:28:55.580498   13070 cache.go:56] Caching tarball of preloaded images
	I0725 17:28:55.580664   13070 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0725 17:28:55.582595   13070 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0725 17:28:55.582618   13070 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0725 17:28:55.684492   13070 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-048310 host does not exist
	  To start a cluster, run: "minikube start -p download-only-048310"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
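The log above shows the download-only run caching the boot ISO and the v1.20.0 cri-o preload tarball without ever creating the VM, and it prints the command needed to turn the cached profile into a running cluster. A minimal sketch of that follow-up step, assuming the profile still exists (the Delete* tests later in this report remove it) and reusing the flags from the original invocation:

	# start a real cluster from the already-downloaded ISO and preload
	minikube start -p download-only-048310 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0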

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-048310
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (12.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-170797 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-170797 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.13015636s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-170797
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-170797: exit status 85 (54.012124ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-048310 | jenkins | v1.33.1 | 25 Jul 24 17:28 UTC |                     |
	|         | -p download-only-048310        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:29 UTC |
	| delete  | -p download-only-048310        | download-only-048310 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:29 UTC |
	| start   | -o=json --download-only        | download-only-170797 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC |                     |
	|         | -p download-only-170797        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 17:29:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:29:10.056009   13328 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:29:10.056138   13328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:29:10.056148   13328 out.go:304] Setting ErrFile to fd 2...
	I0725 17:29:10.056154   13328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:29:10.056366   13328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:29:10.056911   13328 out.go:298] Setting JSON to true
	I0725 17:29:10.057681   13328 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":694,"bootTime":1721927856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 17:29:10.057734   13328 start.go:139] virtualization: kvm guest
	I0725 17:29:10.059659   13328 out.go:97] [download-only-170797] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 17:29:10.059812   13328 notify.go:220] Checking for updates...
	I0725 17:29:10.060891   13328 out.go:169] MINIKUBE_LOCATION=19326
	I0725 17:29:10.062009   13328 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:29:10.063267   13328 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:29:10.064344   13328 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:29:10.065542   13328 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0725 17:29:10.067784   13328 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 17:29:10.067959   13328 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 17:29:10.098871   13328 out.go:97] Using the kvm2 driver based on user configuration
	I0725 17:29:10.098900   13328 start.go:297] selected driver: kvm2
	I0725 17:29:10.098907   13328 start.go:901] validating driver "kvm2" against <nil>
	I0725 17:29:10.099210   13328 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:29:10.099272   13328 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 17:29:10.113604   13328 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 17:29:10.113646   13328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 17:29:10.114090   13328 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0725 17:29:10.114230   13328 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 17:29:10.114256   13328 cni.go:84] Creating CNI manager for ""
	I0725 17:29:10.114263   13328 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 17:29:10.114274   13328 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 17:29:10.114319   13328 start.go:340] cluster config:
	{Name:download-only-170797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-170797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:29:10.114427   13328 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:29:10.116186   13328 out.go:97] Starting "download-only-170797" primary control-plane node in "download-only-170797" cluster
	I0725 17:29:10.116215   13328 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:29:10.619482   13328 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0725 17:29:10.619520   13328 cache.go:56] Caching tarball of preloaded images
	I0725 17:29:10.619683   13328 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0725 17:29:10.621570   13328 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0725 17:29:10.621591   13328 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0725 17:29:10.727359   13328 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-170797 host does not exist
	  To start a cluster, run: "minikube start -p download-only-170797"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-170797
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (11.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-108558 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-108558 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.733661581s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (11.73s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-108558
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-108558: exit status 85 (57.45707ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-048310 | jenkins | v1.33.1 | 25 Jul 24 17:28 UTC |                     |
	|         | -p download-only-048310             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:29 UTC |
	| delete  | -p download-only-048310             | download-only-048310 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:29 UTC |
	| start   | -o=json --download-only             | download-only-170797 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC |                     |
	|         | -p download-only-170797             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:29 UTC |
	| delete  | -p download-only-170797             | download-only-170797 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC | 25 Jul 24 17:29 UTC |
	| start   | -o=json --download-only             | download-only-108558 | jenkins | v1.33.1 | 25 Jul 24 17:29 UTC |                     |
	|         | -p download-only-108558             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 17:29:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 17:29:22.486746   13530 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:29:22.486959   13530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:29:22.486967   13530 out.go:304] Setting ErrFile to fd 2...
	I0725 17:29:22.486971   13530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:29:22.487142   13530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:29:22.487659   13530 out.go:298] Setting JSON to true
	I0725 17:29:22.488485   13530 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":706,"bootTime":1721927856,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 17:29:22.488541   13530 start.go:139] virtualization: kvm guest
	I0725 17:29:22.490524   13530 out.go:97] [download-only-108558] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 17:29:22.490674   13530 notify.go:220] Checking for updates...
	I0725 17:29:22.492066   13530 out.go:169] MINIKUBE_LOCATION=19326
	I0725 17:29:22.493619   13530 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:29:22.494804   13530 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:29:22.496094   13530 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:29:22.497234   13530 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0725 17:29:22.499494   13530 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 17:29:22.499714   13530 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 17:29:22.531207   13530 out.go:97] Using the kvm2 driver based on user configuration
	I0725 17:29:22.531242   13530 start.go:297] selected driver: kvm2
	I0725 17:29:22.531251   13530 start.go:901] validating driver "kvm2" against <nil>
	I0725 17:29:22.531586   13530 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:29:22.531673   13530 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19326-5877/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0725 17:29:22.546994   13530 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0725 17:29:22.547057   13530 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 17:29:22.547542   13530 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0725 17:29:22.547688   13530 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 17:29:22.547714   13530 cni.go:84] Creating CNI manager for ""
	I0725 17:29:22.547725   13530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0725 17:29:22.547737   13530 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 17:29:22.547803   13530 start.go:340] cluster config:
	{Name:download-only-108558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-108558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:29:22.547959   13530 iso.go:125] acquiring lock: {Name:mk0da515ffe64c6b3d23819e6e10f3a7aeecda7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 17:29:22.549732   13530 out.go:97] Starting "download-only-108558" primary control-plane node in "download-only-108558" cluster
	I0725 17:29:22.549751   13530 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0725 17:29:23.052177   13530 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0725 17:29:23.052214   13530 cache.go:56] Caching tarball of preloaded images
	I0725 17:29:23.052411   13530 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0725 17:29:23.054198   13530 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0725 17:29:23.054214   13530 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0725 17:29:23.150074   13530 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19326-5877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-108558 host does not exist
	  To start a cluster, run: "minikube start -p download-only-108558"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)
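Editor's note: the preload tarball fetched in the log above can be downloaded and verified by hand. A minimal sketch, using the URL and md5 checksum reported by minikube; the local filename is illustrative only.

# fetch the same CRI-O preload tarball the download-only test pulls
URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4"
curl -fL -o preload-v1.31.0-beta.0.tar.lz4 "$URL"
# compare against the checksum minikube logs (md5:3743f5ddb63994a661f14e5a8d3af98c)
echo "3743f5ddb63994a661f14e5a8d3af98c  preload-v1.31.0-beta.0.tar.lz4" | md5sum -c -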

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-108558
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-606783 --alsologtostderr --binary-mirror http://127.0.0.1:44459 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-606783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-606783
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
x
+
TestOffline (95.82s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-872594 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-872594 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m34.720568193s)
helpers_test.go:175: Cleaning up "offline-crio-872594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-872594
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-872594: (1.10366952s)
--- PASS: TestOffline (95.82s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-377932
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-377932: exit status 85 (44.751272ms)

                                                
                                                
-- stdout --
	* Profile "addons-377932" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-377932"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-377932
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-377932: exit status 85 (46.243733ms)

                                                
                                                
-- stdout --
	* Profile "addons-377932" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-377932"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
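Editor's note: both PreSetup checks rely on minikube returning exit status 85 when an addons command targets a profile that does not exist yet. A minimal sketch of reproducing that by hand, run before the profile has been created:

out/minikube-linux-amd64 addons disable dashboard -p addons-377932
if [ $? -eq 85 ]; then
  echo "got the expected 'profile not found' exit status"
fi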

                                                
                                    
x
+
TestAddons/Setup (142.82s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-377932 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-377932 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m22.819647433s)
--- PASS: TestAddons/Setup (142.82s)
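Editor's note: after Setup, addons can also be toggled one at a time on the running profile; a minimal sketch using the same enable/disable pattern the parallel tests below exercise:

out/minikube-linux-amd64 addons enable headlamp -p addons-377932
out/minikube-linux-amd64 -p addons-377932 addons disable helm-tiller --alsologtostderr -v=1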

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-377932 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-377932 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.710482ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-rkw7r" [c0a7b843-4a5e-4647-b7cb-7dd968ac91e1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.182442897s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-d8vdg" [83703257-9ba2-4749-b11e-965f7b8f4403] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004681794s
addons_test.go:342: (dbg) Run:  kubectl --context addons-377932 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-377932 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-377932 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.00917531s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 ip
2024/07/25 17:32:39 [DEBUG] GET http://192.168.39.150:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.94s)
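Editor's note: the registry reachability probe in this test can be reproduced outside the suite. A minimal sketch, using the same busybox image and in-cluster service name as the log above, plus the node-level endpoint the test hits on port 5000:

# expect HTTP response headers back from the addon registry's cluster-internal service
kubectl --context addons-377932 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
# the registry is also published on the node IP at port 5000 (see the GET above)
curl -sI "http://$(out/minikube-linux-amd64 -p addons-377932 ip):5000/"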

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.95s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4bdx5" [90ff6848-f083-4199-a5e1-617f7d255e67] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005607215s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-377932
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-377932: (5.947197614s)
--- PASS: TestAddons/parallel/InspektorGadget (10.95s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.79s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.155437ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-gzwvc" [404a7d43-869c-4137-b5a9-e4f4ce531f65] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004507562s
addons_test.go:475: (dbg) Run:  kubectl --context addons-377932 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-377932 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.181439531s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.79s)

                                                
                                    
x
+
TestAddons/parallel/CSI (68.87s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.015127ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-377932 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-377932 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [109d0c87-194b-442f-a685-a4f47254128c] Pending
helpers_test.go:344: "task-pv-pod" [109d0c87-194b-442f-a685-a4f47254128c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [109d0c87-194b-442f-a685-a4f47254128c] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003632188s
addons_test.go:590: (dbg) Run:  kubectl --context addons-377932 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-377932 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-377932 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-377932 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-377932 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-377932 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-377932 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [dc4a6a30-f2d8-45fc-8dd3-f43eea37f254] Pending
helpers_test.go:344: "task-pv-pod-restore" [dc4a6a30-f2d8-45fc-8dd3-f43eea37f254] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [dc4a6a30-f2d8-45fc-8dd3-f43eea37f254] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004580295s
addons_test.go:632: (dbg) Run:  kubectl --context addons-377932 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-377932 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-377932 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-377932 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.70843812s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.87s)
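Editor's note: the claim created from testdata/csi-hostpath-driver/pvc.yaml is not printed in the log. A minimal sketch of an equivalent PVC, assuming the addon's StorageClass is named csi-hostpath-sc (an assumption, not confirmed by this log), followed by the same phase poll the helpers run:

kubectl --context addons-377932 apply -f - <<'EOF'
# hypothetical equivalent of testdata/csi-hostpath-driver/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc   # assumed StorageClass name
EOF
kubectl --context addons-377932 get pvc hpvc -o jsonpath={.status.phase} -n default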

                                                
                                    
x
+
TestAddons/parallel/Headlamp (19.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-377932 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-fhj6c" [69a7ddbf-293f-40e8-9896-20cf181dacb1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-fhj6c" [69a7ddbf-293f-40e8-9896-20cf181dacb1] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003807905s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-377932 addons disable headlamp --alsologtostderr -v=1: (5.766581403s)
--- PASS: TestAddons/parallel/Headlamp (19.56s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-c6pz7" [15c3842e-bdf0-4077-8099-f208d8f559d4] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004139567s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-377932
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (55.04s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-377932 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-377932 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-377932 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c5b117c7-d7f8-4fca-a938-67da43863955] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c5b117c7-d7f8-4fca-a938-67da43863955] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c5b117c7-d7f8-4fca-a938-67da43863955] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003531179s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-377932 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 ssh "cat /opt/local-path-provisioner/pvc-21933440-c7fa-4b82-89b2-60e7bd69bee6_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-377932 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-377932 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-377932 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.257886025s)
--- PASS: TestAddons/parallel/LocalPath (55.04s)
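Editor's note: the PVC from testdata/storage-provisioner-rancher/pvc.yaml is likewise not shown. A minimal sketch of an equivalent claim, assuming the rancher local-path provisioner's StorageClass is named local-path and a 128Mi request (both assumptions):

kubectl --context addons-377932 apply -f - <<'EOF'
# hypothetical equivalent of testdata/storage-provisioner-rancher/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 128Mi
  storageClassName: local-path   # assumed StorageClass name
EOF
# the provisioner keeps the backing data under /opt/local-path-provisioner on the node,
# which is what the test reads back over ssh (see the 'cat .../file1' run above)
kubectl --context addons-377932 get pvc test-pvc -o jsonpath={.status.phase} -n default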

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-g4wdw" [33f0f28c-f9cb-4e40-8b85-364dac249c2b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004475149s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-377932
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-dcg7m" [04ce6653-73f3-4ef8-88b8-53d5cab3958a] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.16619783s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-377932 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-377932 addons disable yakd --alsologtostderr -v=1: (5.700936792s)
--- PASS: TestAddons/parallel/Yakd (10.87s)

                                                
                                    
x
+
TestCertOptions (59.51s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-091318 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-091318 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (58.287118905s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-091318 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-091318 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-091318 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-091318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-091318
--- PASS: TestCertOptions (59.51s)
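Editor's note: what this test asserts, but does not print, is that the extra names, IPs and port land in the apiserver certificate and kubeconfig. A minimal sketch of checking that by hand, building on the openssl and kubectl commands the test itself runs:

# SANs should include www.google.com and 192.168.15.15
out/minikube-linux-amd64 -p cert-options-091318 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# the kubeconfig server entry should use the custom apiserver port 8555
kubectl --context cert-options-091318 config view \
  -o jsonpath='{.clusters[?(@.name=="cert-options-091318")].cluster.server}'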

                                                
                                    
x
+
TestCertExpiration (259.22s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-979261 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-979261 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (39.385079715s)
E0725 18:38:55.102229   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-979261 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-979261 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (38.843688687s)
helpers_test.go:175: Cleaning up "cert-expiration-979261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-979261
--- PASS: TestCertExpiration (259.22s)
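Editor's note: a minimal sketch of inspecting the certificate lifetime the two starts above configure (first 3m, then 8760h); the certificate path is taken from the CertOptions test and assumed to be the same here:

out/minikube-linux-amd64 -p cert-expiration-979261 ssh \
  "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
# after the second start with --cert-expiration=8760h, notAfter should be about a year out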

                                                
                                    
x
+
TestForceSystemdFlag (71.39s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-267077 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-267077 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.192973379s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-267077 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-267077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-267077
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-267077: (1.00480346s)
--- PASS: TestForceSystemdFlag (71.39s)
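Editor's note: the point of the 'cat /etc/crio/crio.conf.d/02-crio.conf' step is that --force-systemd switches CRI-O to the systemd cgroup manager. A minimal sketch of checking that directly; cgroup_manager is a standard CRI-O setting, but its presence in this particular drop-in file is an assumption based on the run above:

out/minikube-linux-amd64 -p force-systemd-flag-267077 ssh \
  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
# expected output (assumption): cgroup_manager = "systemd"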

                                                
                                    
x
+
TestForceSystemdEnv (41.54s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-207395 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-207395 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (40.49565766s)
helpers_test.go:175: Cleaning up "force-systemd-env-207395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-207395
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-207395: (1.045521341s)
--- PASS: TestForceSystemdEnv (41.54s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.54s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.54s)

                                                
                                    
x
+
TestErrorSpam/setup (38.38s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-830486 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-830486 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-830486 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-830486 --driver=kvm2  --container-runtime=crio: (38.384431889s)
--- PASS: TestErrorSpam/setup (38.38s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
x
+
TestErrorSpam/pause (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 pause
--- PASS: TestErrorSpam/pause (1.46s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

                                                
                                    
x
+
TestErrorSpam/stop (4.56s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 stop: (1.508163438s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 stop: (1.586163592s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-830486 --log_dir /tmp/nospam-830486 stop: (1.462607504s)
--- PASS: TestErrorSpam/stop (4.56s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19326-5877/.minikube/files/etc/test/nested/copy/13059/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (55.39s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896905 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0725 17:41:58.592020   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:41:58.597925   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:41:58.608171   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:41:58.628376   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:41:58.668676   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:41:58.749008   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:41:58.909425   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:41:59.229980   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:41:59.870922   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:42:01.151416   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:42:03.713184   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:42:08.833562   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:42:19.073892   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:42:39.554713   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-896905 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.386762381s)
--- PASS: TestFunctional/serial/StartWithProxy (55.39s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (36.41s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896905 --alsologtostderr -v=8
E0725 17:43:20.515828   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-896905 --alsologtostderr -v=8: (36.406820852s)
functional_test.go:659: soft start took 36.40733806s for "functional-896905" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.41s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-896905 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-896905 cache add registry.k8s.io/pause:3.1: (1.116235622s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-896905 cache add registry.k8s.io/pause:3.3: (1.264452306s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-896905 cache add registry.k8s.io/pause:latest: (1.217741977s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.60s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-896905 /tmp/TestFunctionalserialCacheCmdcacheadd_local913686819/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 cache add minikube-local-cache-test:functional-896905
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-896905 cache add minikube-local-cache-test:functional-896905: (1.7210749s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 cache delete minikube-local-cache-test:functional-896905
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-896905
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.06s)
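The local-image caching flow exercised here can be reproduced by hand; a sketch using the minikube binary directly (the build-context directory is a stand-in, not part of the test output):
	docker build -t minikube-local-cache-test:functional-896905 <build-context-dir>
	minikube -p functional-896905 cache add minikube-local-cache-test:functional-896905
	minikube -p functional-896905 cache delete minikube-local-cache-test:functional-896905
	docker rmi minikube-local-cache-test:functional-896905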

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896905 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (213.469674ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)
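The reload sequence above boils down to the following manual steps (a sketch, using the minikube binary directly):
	minikube -p functional-896905 ssh -- sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-896905 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # fails: the image was removed from the node
	minikube -p functional-896905 cache reload                                               # pushes cached images back into the node
	minikube -p functional-896905 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again after the reload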

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 kubectl -- --context functional-896905 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-896905 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.61s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896905 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-896905 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.605246008s)
functional_test.go:757: restart took 31.605333166s for "functional-896905" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.61s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-896905 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-896905 logs: (1.338479841s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 logs --file /tmp/TestFunctionalserialLogsFileCmd3937834808/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-896905 logs --file /tmp/TestFunctionalserialLogsFileCmd3937834808/001/logs.txt: (1.355786624s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
TestFunctional/serial/InvalidService (4.02s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-896905 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-896905
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-896905: exit status 115 (263.101681ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.106:32208 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-896905 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.02s)
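What the test demonstrates, in manual form (a sketch; invalidsvc.yaml is the testdata manifest, whose contents are not reproduced here):
	kubectl --context functional-896905 apply -f testdata/invalidsvc.yaml    # service with no running backing pod
	minikube service invalid-svc -p functional-896905                        # exits 115 with SVC_UNREACHABLE
	kubectl --context functional-896905 delete -f testdata/invalidsvc.yaml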

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896905 config get cpus: exit status 14 (49.200952ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896905 config get cpus: exit status 14 (44.184138ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
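The behaviour being asserted: config get on an unset key exits with status 14, while set/unset succeed silently. A manual sketch:
	minikube -p functional-896905 config set cpus 2
	minikube -p functional-896905 config get cpus      # prints 2
	minikube -p functional-896905 config unset cpus
	minikube -p functional-896905 config get cpus      # exit status 14: "specified key could not be found in config"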

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-896905 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-896905 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22948: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.07s)

                                                
                                    
TestFunctional/parallel/DryRun (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896905 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-896905 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (224.854545ms)

                                                
                                                
-- stdout --
	* [functional-896905] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:44:30.385742   22521 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:44:30.385842   22521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:44:30.385851   22521 out.go:304] Setting ErrFile to fd 2...
	I0725 17:44:30.385855   22521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:44:30.386070   22521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:44:30.386645   22521 out.go:298] Setting JSON to false
	I0725 17:44:30.387635   22521 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1614,"bootTime":1721927856,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 17:44:30.387692   22521 start.go:139] virtualization: kvm guest
	I0725 17:44:30.389826   22521 out.go:177] * [functional-896905] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 17:44:30.391091   22521 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 17:44:30.391146   22521 notify.go:220] Checking for updates...
	I0725 17:44:30.393320   22521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:44:30.394687   22521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:44:30.395798   22521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:44:30.396777   22521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 17:44:30.397791   22521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 17:44:30.399301   22521 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:44:30.399774   22521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:44:30.399807   22521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:44:30.415441   22521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0725 17:44:30.415821   22521 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:44:30.416381   22521 main.go:141] libmachine: Using API Version  1
	I0725 17:44:30.416409   22521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:44:30.416731   22521 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:44:30.416919   22521 main.go:141] libmachine: (functional-896905) Calling .DriverName
	I0725 17:44:30.417166   22521 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 17:44:30.417428   22521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:44:30.417459   22521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:44:30.432039   22521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41071
	I0725 17:44:30.432488   22521 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:44:30.432976   22521 main.go:141] libmachine: Using API Version  1
	I0725 17:44:30.433004   22521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:44:30.433317   22521 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:44:30.433476   22521 main.go:141] libmachine: (functional-896905) Calling .DriverName
	I0725 17:44:30.474095   22521 out.go:177] * Using the kvm2 driver based on existing profile
	I0725 17:44:30.475661   22521 start.go:297] selected driver: kvm2
	I0725 17:44:30.475677   22521 start.go:901] validating driver "kvm2" against &{Name:functional-896905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-896905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:44:30.475839   22521 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:44:30.540500   22521 out.go:177] 
	W0725 17:44:30.542119   22521 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0725 17:44:30.543622   22521 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896905 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)
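The dry-run path validates flags against the existing profile without touching the VM; a minimal sketch of the failing and passing invocations exercised above:
	minikube start -p functional-896905 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio   # exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
	minikube start -p functional-896905 --dry-run --driver=kvm2 --container-runtime=crio                  # validates cleanly; nothing is started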

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-896905 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-896905 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (125.548419ms)

                                                
                                                
-- stdout --
	* [functional-896905] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 17:44:30.730815   22577 out.go:291] Setting OutFile to fd 1 ...
	I0725 17:44:30.730901   22577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:44:30.730906   22577 out.go:304] Setting ErrFile to fd 2...
	I0725 17:44:30.730910   22577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 17:44:30.731180   22577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 17:44:30.731683   22577 out.go:298] Setting JSON to false
	I0725 17:44:30.732564   22577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1615,"bootTime":1721927856,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 17:44:30.732622   22577 start.go:139] virtualization: kvm guest
	I0725 17:44:30.734693   22577 out.go:177] * [functional-896905] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0725 17:44:30.736181   22577 notify.go:220] Checking for updates...
	I0725 17:44:30.736237   22577 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 17:44:30.737778   22577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 17:44:30.739124   22577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 17:44:30.740489   22577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 17:44:30.741671   22577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 17:44:30.742789   22577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 17:44:30.744413   22577 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 17:44:30.744778   22577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:44:30.744853   22577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:44:30.759343   22577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38365
	I0725 17:44:30.759736   22577 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:44:30.760231   22577 main.go:141] libmachine: Using API Version  1
	I0725 17:44:30.760251   22577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:44:30.760603   22577 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:44:30.760818   22577 main.go:141] libmachine: (functional-896905) Calling .DriverName
	I0725 17:44:30.761095   22577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 17:44:30.761447   22577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 17:44:30.761483   22577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 17:44:30.776248   22577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0725 17:44:30.776670   22577 main.go:141] libmachine: () Calling .GetVersion
	I0725 17:44:30.777202   22577 main.go:141] libmachine: Using API Version  1
	I0725 17:44:30.777223   22577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 17:44:30.777617   22577 main.go:141] libmachine: () Calling .GetMachineName
	I0725 17:44:30.777788   22577 main.go:141] libmachine: (functional-896905) Calling .DriverName
	I0725 17:44:30.809396   22577 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0725 17:44:30.810706   22577 start.go:297] selected driver: kvm2
	I0725 17:44:30.810720   22577 start.go:901] validating driver "kvm2" against &{Name:functional-896905 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-896905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 17:44:30.810843   22577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 17:44:30.813048   22577 out.go:177] 
	W0725 17:44:30.814498   22577 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0725 17:44:30.815817   22577 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-896905 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-896905 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-zjhnk" [8f12bf08-c81f-4e7c-8b6f-b4a81e60d351] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-zjhnk" [8f12bf08-c81f-4e7c-8b6f-b4a81e60d351] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00404713s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.106:31179
functional_test.go:1671: http://192.168.39.106:31179: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-zjhnk

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.106:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.106:31179
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.67s)
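The NodePort round trip above corresponds to the following manual steps (a sketch; the curl call is added for illustration and is not part of the test output):
	kubectl --context functional-896905 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-896905 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(minikube -p functional-896905 service hello-node-connect --url)    # e.g. http://192.168.39.106:31179
	curl -s "$URL"                                                           # echoserver reports its hostname and the request headers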

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [874ecb4c-70b6-4d8d-8890-29678270c518] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004644327s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-896905 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-896905 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-896905 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-896905 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-896905 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fdb8b80b-2ac0-40d9-9f26-32e9d57df14c] Pending
helpers_test.go:344: "sp-pod" [fdb8b80b-2ac0-40d9-9f26-32e9d57df14c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fdb8b80b-2ac0-40d9-9f26-32e9d57df14c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004354135s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-896905 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-896905 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-896905 delete -f testdata/storage-provisioner/pod.yaml: (1.476911567s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-896905 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ff1710d0-a32d-4b58-89eb-2ef0e543c815] Pending
helpers_test.go:344: "sp-pod" [ff1710d0-a32d-4b58-89eb-2ef0e543c815] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ff1710d0-a32d-4b58-89eb-2ef0e543c815] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004238943s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-896905 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.46s)
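The persistence check above, reduced to its manual form (a sketch; pvc.yaml and pod.yaml refer to the storage-provisioner testdata manifests, not reproduced here):
	kubectl --context functional-896905 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-896905 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-896905 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-896905 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-896905 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-896905 exec sp-pod -- ls /tmp/mount          # foo survives the pod recreation via the PVC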

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh -n functional-896905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 cp functional-896905:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd110175370/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh -n functional-896905 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh -n functional-896905 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.29s)

                                                
                                    
TestFunctional/parallel/MySQL (23.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-896905 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-zgpns" [65b9b5f1-46c7-4238-a839-10beb3847a0a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-zgpns" [65b9b5f1-46c7-4238-a839-10beb3847a0a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004413914s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-896905 exec mysql-64454c8b5c-zgpns -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-896905 exec mysql-64454c8b5c-zgpns -- mysql -ppassword -e "show databases;": exit status 1 (134.553763ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-896905 exec mysql-64454c8b5c-zgpns -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-896905 exec mysql-64454c8b5c-zgpns -- mysql -ppassword -e "show databases;": exit status 1 (139.97978ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-896905 exec mysql-64454c8b5c-zgpns -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.77s)
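The two ERROR 2002 failures above only mean mysqld had not finished starting inside the pod; the test retries until the socket is ready. The equivalent shell-level retry (a sketch, reusing the pod name from this run):
	until kubectl --context functional-896905 exec mysql-64454c8b5c-zgpns -- mysql -ppassword -e "show databases;"; do
		sleep 2   # ERROR 2002 means mysqld is still initialising
	done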

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13059/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "sudo cat /etc/test/nested/copy/13059/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13059.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "sudo cat /etc/ssl/certs/13059.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13059.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "sudo cat /usr/share/ca-certificates/13059.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/130592.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "sudo cat /etc/ssl/certs/130592.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/130592.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "sudo cat /usr/share/ca-certificates/130592.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)
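A manual spot-check of the synced certificates (a sketch; the .0 entries are assumed to be the same certificates under OpenSSL hash-based file names):
	minikube -p functional-896905 ssh "sudo cat /etc/ssl/certs/13059.pem"
	minikube -p functional-896905 ssh "sudo cat /usr/share/ca-certificates/13059.pem"
	minikube -p functional-896905 ssh "sudo cat /etc/ssl/certs/51391683.0"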

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-896905 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896905 ssh "sudo systemctl is-active docker": exit status 1 (199.770439ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896905 ssh "sudo systemctl is-active containerd": exit status 1 (207.769771ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
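Only the selected runtime should be active in the guest; a sketch of the checks above plus the complementary one for the configured runtime (assuming the CRI-O unit is named crio):
	minikube -p functional-896905 ssh "sudo systemctl is-active crio"          # expected: active
	minikube -p functional-896905 ssh "sudo systemctl is-active docker"        # inactive; ssh exits 3
	minikube -p functional-896905 ssh "sudo systemctl is-active containerd"    # inactive; ssh exits 3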

                                                
                                    
TestFunctional/parallel/License (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-896905 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-896905 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-4z6ds" [8b893eef-cb77-4850-8a41-c3ee091f6826] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-4z6ds" [8b893eef-cb77-4850-8a41-c3ee091f6826] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003425731s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896905 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-896905
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-896905
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896905 image ls --format short --alsologtostderr:
I0725 17:44:40.577254   22900 out.go:291] Setting OutFile to fd 1 ...
I0725 17:44:40.577472   22900 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 17:44:40.577480   22900 out.go:304] Setting ErrFile to fd 2...
I0725 17:44:40.577484   22900 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 17:44:40.577642   22900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
I0725 17:44:40.578134   22900 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0725 17:44:40.578221   22900 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0725 17:44:40.578579   22900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0725 17:44:40.578623   22900 main.go:141] libmachine: Launching plugin server for driver kvm2
I0725 17:44:40.593558   22900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
I0725 17:44:40.593989   22900 main.go:141] libmachine: () Calling .GetVersion
I0725 17:44:40.594497   22900 main.go:141] libmachine: Using API Version  1
I0725 17:44:40.594521   22900 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 17:44:40.594816   22900 main.go:141] libmachine: () Calling .GetMachineName
I0725 17:44:40.595016   22900 main.go:141] libmachine: (functional-896905) Calling .GetState
I0725 17:44:40.596937   22900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0725 17:44:40.597002   22900 main.go:141] libmachine: Launching plugin server for driver kvm2
I0725 17:44:40.611542   22900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43003
I0725 17:44:40.611976   22900 main.go:141] libmachine: () Calling .GetVersion
I0725 17:44:40.612524   22900 main.go:141] libmachine: Using API Version  1
I0725 17:44:40.612545   22900 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 17:44:40.612844   22900 main.go:141] libmachine: () Calling .GetMachineName
I0725 17:44:40.613044   22900 main.go:141] libmachine: (functional-896905) Calling .DriverName
I0725 17:44:40.613246   22900 ssh_runner.go:195] Run: systemctl --version
I0725 17:44:40.613273   22900 main.go:141] libmachine: (functional-896905) Calling .GetSSHHostname
I0725 17:44:40.616318   22900 main.go:141] libmachine: (functional-896905) DBG | domain functional-896905 has defined MAC address 52:54:00:67:fa:41 in network mk-functional-896905
I0725 17:44:40.616813   22900 main.go:141] libmachine: (functional-896905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fa:41", ip: ""} in network mk-functional-896905: {Iface:virbr1 ExpiryTime:2024-07-25 18:42:07 +0000 UTC Type:0 Mac:52:54:00:67:fa:41 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:functional-896905 Clientid:01:52:54:00:67:fa:41}
I0725 17:44:40.616841   22900 main.go:141] libmachine: (functional-896905) DBG | domain functional-896905 has defined IP address 192.168.39.106 and MAC address 52:54:00:67:fa:41 in network mk-functional-896905
I0725 17:44:40.617043   22900 main.go:141] libmachine: (functional-896905) Calling .GetSSHPort
I0725 17:44:40.617185   22900 main.go:141] libmachine: (functional-896905) Calling .GetSSHKeyPath
I0725 17:44:40.617352   22900 main.go:141] libmachine: (functional-896905) Calling .GetSSHUsername
I0725 17:44:40.617510   22900 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/functional-896905/id_rsa Username:docker}
I0725 17:44:40.699302   22900 ssh_runner.go:195] Run: sudo crictl images --output json
I0725 17:44:40.771905   22900 main.go:141] libmachine: Making call to close driver server
I0725 17:44:40.771929   22900 main.go:141] libmachine: (functional-896905) Calling .Close
I0725 17:44:40.772175   22900 main.go:141] libmachine: Successfully made call to close driver server
I0725 17:44:40.772202   22900 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 17:44:40.772212   22900 main.go:141] libmachine: Making call to close driver server
I0725 17:44:40.772220   22900 main.go:141] libmachine: (functional-896905) Calling .Close
I0725 17:44:40.772465   22900 main.go:141] libmachine: (functional-896905) DBG | Closing plugin on server side
I0725 17:44:40.772483   22900 main.go:141] libmachine: Successfully made call to close driver server
I0725 17:44:40.772499   22900 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896905 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/minikube-local-cache-test     | functional-896905  | bbae57f269822 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/kicbase/echo-server           | functional-896905  | 9056ab77afb8e | 4.94MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/my-image                      | functional-896905  | 1f285ba827df0 | 1.47MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896905 image ls --format table --alsologtostderr:
I0725 17:44:45.215109   23439 out.go:291] Setting OutFile to fd 1 ...
I0725 17:44:45.215237   23439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 17:44:45.215247   23439 out.go:304] Setting ErrFile to fd 2...
I0725 17:44:45.215253   23439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 17:44:45.215449   23439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
I0725 17:44:45.216009   23439 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0725 17:44:45.216122   23439 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0725 17:44:45.216538   23439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0725 17:44:45.216595   23439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0725 17:44:45.231740   23439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
I0725 17:44:45.232162   23439 main.go:141] libmachine: () Calling .GetVersion
I0725 17:44:45.232858   23439 main.go:141] libmachine: Using API Version  1
I0725 17:44:45.232885   23439 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 17:44:45.233215   23439 main.go:141] libmachine: () Calling .GetMachineName
I0725 17:44:45.233442   23439 main.go:141] libmachine: (functional-896905) Calling .GetState
I0725 17:44:45.235438   23439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0725 17:44:45.235477   23439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0725 17:44:45.250291   23439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
I0725 17:44:45.250689   23439 main.go:141] libmachine: () Calling .GetVersion
I0725 17:44:45.251158   23439 main.go:141] libmachine: Using API Version  1
I0725 17:44:45.251183   23439 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 17:44:45.251583   23439 main.go:141] libmachine: () Calling .GetMachineName
I0725 17:44:45.251805   23439 main.go:141] libmachine: (functional-896905) Calling .DriverName
I0725 17:44:45.252032   23439 ssh_runner.go:195] Run: systemctl --version
I0725 17:44:45.252053   23439 main.go:141] libmachine: (functional-896905) Calling .GetSSHHostname
I0725 17:44:45.254728   23439 main.go:141] libmachine: (functional-896905) DBG | domain functional-896905 has defined MAC address 52:54:00:67:fa:41 in network mk-functional-896905
I0725 17:44:45.255126   23439 main.go:141] libmachine: (functional-896905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fa:41", ip: ""} in network mk-functional-896905: {Iface:virbr1 ExpiryTime:2024-07-25 18:42:07 +0000 UTC Type:0 Mac:52:54:00:67:fa:41 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:functional-896905 Clientid:01:52:54:00:67:fa:41}
I0725 17:44:45.255150   23439 main.go:141] libmachine: (functional-896905) DBG | domain functional-896905 has defined IP address 192.168.39.106 and MAC address 52:54:00:67:fa:41 in network mk-functional-896905
I0725 17:44:45.255296   23439 main.go:141] libmachine: (functional-896905) Calling .GetSSHPort
I0725 17:44:45.255448   23439 main.go:141] libmachine: (functional-896905) Calling .GetSSHKeyPath
I0725 17:44:45.255584   23439 main.go:141] libmachine: (functional-896905) Calling .GetSSHUsername
I0725 17:44:45.255730   23439 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/functional-896905/id_rsa Username:docker}
I0725 17:44:45.393499   23439 ssh_runner.go:195] Run: sudo crictl images --output json
I0725 17:44:45.461025   23439 main.go:141] libmachine: Making call to close driver server
I0725 17:44:45.461049   23439 main.go:141] libmachine: (functional-896905) Calling .Close
I0725 17:44:45.461334   23439 main.go:141] libmachine: Successfully made call to close driver server
I0725 17:44:45.461355   23439 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 17:44:45.461372   23439 main.go:141] libmachine: Making call to close driver server
I0725 17:44:45.461380   23439 main.go:141] libmachine: (functional-896905) Calling .Close
I0725 17:44:45.461596   23439 main.go:141] libmachine: Successfully made call to close driver server
I0725 17:44:45.461609   23439 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 17:44:45.461618   23439 main.go:141] libmachine: (functional-896905) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896905 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-896905"],"size":"4943877"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d
166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"1f285ba827df0b27c06c22e7926541eb08042a26146e1c625602257a960e3a73","repoDigests":["localhost/my-image@sha256:f09f5f978feb4dc1d07ee0ea92b0647080f3026ed1ed551492a7454de133ad31"],"repoTags":["localhost/my-image:functional-896905"],"size":"1468599"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d46
3f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kind
est/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"79c0e7fad74018c19d8a5cc5758772e84d9e010f4146943d39d116fec16c909b","repoDigests":["docker.io/library/cf5169b84933b66a692c5389801b73f06b8f3af0d1629ff1d0c8817496eaeee3-tmp@sha256:3b5654ed2e47ad93789ac34d379b3cc42422a29a9ebb430a8cba1dc898517128"],"repoTags":[],"size":"1466018"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"bbae57f2698223949a45148996c408d7491a2c0a4806e573e92368ab84ec552f","repoDigests":["localhost/minikube-local-cache-test@sha256:4fae8f69c0f8f5007ac2229722231dc8bd0832bd3cd026202b6c757b8d431901"],"repoTags":["localhost/minikube-local-cache-test:functional-896905"],"size":"3330"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856
f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987
919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"cbb01a7bd410dc
08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896905 image ls --format json --alsologtostderr:
I0725 17:44:44.916915   23322 out.go:291] Setting OutFile to fd 1 ...
I0725 17:44:44.917033   23322 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 17:44:44.917044   23322 out.go:304] Setting ErrFile to fd 2...
I0725 17:44:44.917050   23322 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 17:44:44.917373   23322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
I0725 17:44:44.918077   23322 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0725 17:44:44.918180   23322 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0725 17:44:44.918533   23322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0725 17:44:44.918579   23322 main.go:141] libmachine: Launching plugin server for driver kvm2
I0725 17:44:44.936788   23322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
I0725 17:44:44.937295   23322 main.go:141] libmachine: () Calling .GetVersion
I0725 17:44:44.937903   23322 main.go:141] libmachine: Using API Version  1
I0725 17:44:44.937927   23322 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 17:44:44.938304   23322 main.go:141] libmachine: () Calling .GetMachineName
I0725 17:44:44.938499   23322 main.go:141] libmachine: (functional-896905) Calling .GetState
I0725 17:44:44.940585   23322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0725 17:44:44.940670   23322 main.go:141] libmachine: Launching plugin server for driver kvm2
I0725 17:44:44.957238   23322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
I0725 17:44:44.957593   23322 main.go:141] libmachine: () Calling .GetVersion
I0725 17:44:44.958137   23322 main.go:141] libmachine: Using API Version  1
I0725 17:44:44.958176   23322 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 17:44:44.958517   23322 main.go:141] libmachine: () Calling .GetMachineName
I0725 17:44:44.958727   23322 main.go:141] libmachine: (functional-896905) Calling .DriverName
I0725 17:44:44.958945   23322 ssh_runner.go:195] Run: systemctl --version
I0725 17:44:44.958971   23322 main.go:141] libmachine: (functional-896905) Calling .GetSSHHostname
I0725 17:44:44.962190   23322 main.go:141] libmachine: (functional-896905) DBG | domain functional-896905 has defined MAC address 52:54:00:67:fa:41 in network mk-functional-896905
I0725 17:44:44.962627   23322 main.go:141] libmachine: (functional-896905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fa:41", ip: ""} in network mk-functional-896905: {Iface:virbr1 ExpiryTime:2024-07-25 18:42:07 +0000 UTC Type:0 Mac:52:54:00:67:fa:41 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:functional-896905 Clientid:01:52:54:00:67:fa:41}
I0725 17:44:44.962658   23322 main.go:141] libmachine: (functional-896905) DBG | domain functional-896905 has defined IP address 192.168.39.106 and MAC address 52:54:00:67:fa:41 in network mk-functional-896905
I0725 17:44:44.962806   23322 main.go:141] libmachine: (functional-896905) Calling .GetSSHPort
I0725 17:44:44.962977   23322 main.go:141] libmachine: (functional-896905) Calling .GetSSHKeyPath
I0725 17:44:44.963107   23322 main.go:141] libmachine: (functional-896905) Calling .GetSSHUsername
I0725 17:44:44.963297   23322 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/functional-896905/id_rsa Username:docker}
I0725 17:44:45.085964   23322 ssh_runner.go:195] Run: sudo crictl images --output json
I0725 17:44:45.166695   23322 main.go:141] libmachine: Making call to close driver server
I0725 17:44:45.166711   23322 main.go:141] libmachine: (functional-896905) Calling .Close
I0725 17:44:45.166962   23322 main.go:141] libmachine: Successfully made call to close driver server
I0725 17:44:45.166988   23322 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 17:44:45.167003   23322 main.go:141] libmachine: Making call to close driver server
I0725 17:44:45.167015   23322 main.go:141] libmachine: (functional-896905) Calling .Close
I0725 17:44:45.168476   23322 main.go:141] libmachine: (functional-896905) DBG | Closing plugin on server side
I0725 17:44:45.168557   23322 main.go:141] libmachine: Successfully made call to close driver server
I0725 17:44:45.168601   23322 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896905 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-896905
size: "4943877"
- id: bbae57f2698223949a45148996c408d7491a2c0a4806e573e92368ab84ec552f
repoDigests:
- localhost/minikube-local-cache-test@sha256:4fae8f69c0f8f5007ac2229722231dc8bd0832bd3cd026202b6c757b8d431901
repoTags:
- localhost/minikube-local-cache-test:functional-896905
size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896905 image ls --format yaml --alsologtostderr:
I0725 17:44:40.817750   22923 out.go:291] Setting OutFile to fd 1 ...
I0725 17:44:40.817877   22923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 17:44:40.817886   22923 out.go:304] Setting ErrFile to fd 2...
I0725 17:44:40.817891   22923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 17:44:40.818074   22923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
I0725 17:44:40.818651   22923 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0725 17:44:40.818768   22923 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0725 17:44:40.819106   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0725 17:44:40.819148   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
I0725 17:44:40.833650   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34911
I0725 17:44:40.834099   22923 main.go:141] libmachine: () Calling .GetVersion
I0725 17:44:40.834685   22923 main.go:141] libmachine: Using API Version  1
I0725 17:44:40.834711   22923 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 17:44:40.835059   22923 main.go:141] libmachine: () Calling .GetMachineName
I0725 17:44:40.835278   22923 main.go:141] libmachine: (functional-896905) Calling .GetState
I0725 17:44:40.837386   22923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0725 17:44:40.837422   22923 main.go:141] libmachine: Launching plugin server for driver kvm2
I0725 17:44:40.851795   22923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37755
I0725 17:44:40.852272   22923 main.go:141] libmachine: () Calling .GetVersion
I0725 17:44:40.852826   22923 main.go:141] libmachine: Using API Version  1
I0725 17:44:40.852847   22923 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 17:44:40.853166   22923 main.go:141] libmachine: () Calling .GetMachineName
I0725 17:44:40.853377   22923 main.go:141] libmachine: (functional-896905) Calling .DriverName
I0725 17:44:40.853609   22923 ssh_runner.go:195] Run: systemctl --version
I0725 17:44:40.853640   22923 main.go:141] libmachine: (functional-896905) Calling .GetSSHHostname
I0725 17:44:40.856742   22923 main.go:141] libmachine: (functional-896905) DBG | domain functional-896905 has defined MAC address 52:54:00:67:fa:41 in network mk-functional-896905
I0725 17:44:40.857170   22923 main.go:141] libmachine: (functional-896905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fa:41", ip: ""} in network mk-functional-896905: {Iface:virbr1 ExpiryTime:2024-07-25 18:42:07 +0000 UTC Type:0 Mac:52:54:00:67:fa:41 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:functional-896905 Clientid:01:52:54:00:67:fa:41}
I0725 17:44:40.857208   22923 main.go:141] libmachine: (functional-896905) DBG | domain functional-896905 has defined IP address 192.168.39.106 and MAC address 52:54:00:67:fa:41 in network mk-functional-896905
I0725 17:44:40.857385   22923 main.go:141] libmachine: (functional-896905) Calling .GetSSHPort
I0725 17:44:40.857529   22923 main.go:141] libmachine: (functional-896905) Calling .GetSSHKeyPath
I0725 17:44:40.857686   22923 main.go:141] libmachine: (functional-896905) Calling .GetSSHUsername
I0725 17:44:40.857819   22923 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/functional-896905/id_rsa Username:docker}
I0725 17:44:40.942644   22923 ssh_runner.go:195] Run: sudo crictl images --output json
I0725 17:44:40.978814   22923 main.go:141] libmachine: Making call to close driver server
I0725 17:44:40.978829   22923 main.go:141] libmachine: (functional-896905) Calling .Close
I0725 17:44:40.979071   22923 main.go:141] libmachine: Successfully made call to close driver server
I0725 17:44:40.979089   22923 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 17:44:40.979103   22923 main.go:141] libmachine: Making call to close driver server
I0725 17:44:40.979112   22923 main.go:141] libmachine: (functional-896905) Calling .Close
I0725 17:44:40.979329   22923 main.go:141] libmachine: Successfully made call to close driver server
I0725 17:44:40.979342   22923 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
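The four ImageCommands/ImageList* runs above wrap the same subcommand with a different --format flag. As a rough sketch (profile name taken from this run; any recent minikube binary against a running profile should behave the same), the output variants can be reproduced with:

    out/minikube-linux-amd64 -p functional-896905 image ls --format short
    out/minikube-linux-amd64 -p functional-896905 image ls --format table
    out/minikube-linux-amd64 -p functional-896905 image ls --format json
    out/minikube-linux-amd64 -p functional-896905 image ls --format yaml

With the crio runtime, each variant ultimately SSHes into the node and runs "sudo crictl images --output json", as the Stderr traces above show; only the client-side rendering differs.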

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896905 ssh pgrep buildkitd: exit status 1 (247.447428ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image build -t localhost/my-image:functional-896905 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-896905 image build -t localhost/my-image:functional-896905 testdata/build --alsologtostderr: (3.276172452s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-896905 image build -t localhost/my-image:functional-896905 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 79c0e7fad74
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-896905
--> 1f285ba827d
Successfully tagged localhost/my-image:functional-896905
1f285ba827df0b27c06c22e7926541eb08042a26146e1c625602257a960e3a73
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-896905 image build -t localhost/my-image:functional-896905 testdata/build --alsologtostderr:
I0725 17:44:41.285010   22988 out.go:291] Setting OutFile to fd 1 ...
I0725 17:44:41.285154   22988 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 17:44:41.285163   22988 out.go:304] Setting ErrFile to fd 2...
I0725 17:44:41.285168   22988 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 17:44:41.285365   22988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
I0725 17:44:41.285971   22988 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0725 17:44:41.286497   22988 config.go:182] Loaded profile config "functional-896905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0725 17:44:41.286837   22988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0725 17:44:41.286874   22988 main.go:141] libmachine: Launching plugin server for driver kvm2
I0725 17:44:41.301902   22988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
I0725 17:44:41.302350   22988 main.go:141] libmachine: () Calling .GetVersion
I0725 17:44:41.302928   22988 main.go:141] libmachine: Using API Version  1
I0725 17:44:41.302956   22988 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 17:44:41.303283   22988 main.go:141] libmachine: () Calling .GetMachineName
I0725 17:44:41.303455   22988 main.go:141] libmachine: (functional-896905) Calling .GetState
I0725 17:44:41.305539   22988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0725 17:44:41.305593   22988 main.go:141] libmachine: Launching plugin server for driver kvm2
I0725 17:44:41.320092   22988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45849
I0725 17:44:41.320603   22988 main.go:141] libmachine: () Calling .GetVersion
I0725 17:44:41.321196   22988 main.go:141] libmachine: Using API Version  1
I0725 17:44:41.321234   22988 main.go:141] libmachine: () Calling .SetConfigRaw
I0725 17:44:41.321692   22988 main.go:141] libmachine: () Calling .GetMachineName
I0725 17:44:41.321941   22988 main.go:141] libmachine: (functional-896905) Calling .DriverName
I0725 17:44:41.322207   22988 ssh_runner.go:195] Run: systemctl --version
I0725 17:44:41.322235   22988 main.go:141] libmachine: (functional-896905) Calling .GetSSHHostname
I0725 17:44:41.324790   22988 main.go:141] libmachine: (functional-896905) DBG | domain functional-896905 has defined MAC address 52:54:00:67:fa:41 in network mk-functional-896905
I0725 17:44:41.325184   22988 main.go:141] libmachine: (functional-896905) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:fa:41", ip: ""} in network mk-functional-896905: {Iface:virbr1 ExpiryTime:2024-07-25 18:42:07 +0000 UTC Type:0 Mac:52:54:00:67:fa:41 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:functional-896905 Clientid:01:52:54:00:67:fa:41}
I0725 17:44:41.325221   22988 main.go:141] libmachine: (functional-896905) DBG | domain functional-896905 has defined IP address 192.168.39.106 and MAC address 52:54:00:67:fa:41 in network mk-functional-896905
I0725 17:44:41.325379   22988 main.go:141] libmachine: (functional-896905) Calling .GetSSHPort
I0725 17:44:41.325534   22988 main.go:141] libmachine: (functional-896905) Calling .GetSSHKeyPath
I0725 17:44:41.325696   22988 main.go:141] libmachine: (functional-896905) Calling .GetSSHUsername
I0725 17:44:41.325841   22988 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/functional-896905/id_rsa Username:docker}
I0725 17:44:41.415207   22988 build_images.go:161] Building image from path: /tmp/build.1570525081.tar
I0725 17:44:41.415264   22988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0725 17:44:41.424935   22988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1570525081.tar
I0725 17:44:41.429531   22988 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1570525081.tar: stat -c "%s %y" /var/lib/minikube/build/build.1570525081.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1570525081.tar': No such file or directory
I0725 17:44:41.429567   22988 ssh_runner.go:362] scp /tmp/build.1570525081.tar --> /var/lib/minikube/build/build.1570525081.tar (3072 bytes)
I0725 17:44:41.455644   22988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1570525081
I0725 17:44:41.467635   22988 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1570525081 -xf /var/lib/minikube/build/build.1570525081.tar
I0725 17:44:41.477696   22988 crio.go:315] Building image: /var/lib/minikube/build/build.1570525081
I0725 17:44:41.477751   22988 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-896905 /var/lib/minikube/build/build.1570525081 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0725 17:44:44.470155   22988 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-896905 /var/lib/minikube/build/build.1570525081 --cgroup-manager=cgroupfs: (2.992378698s)
I0725 17:44:44.470232   22988 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1570525081
I0725 17:44:44.487948   22988 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1570525081.tar
I0725 17:44:44.504094   22988 build_images.go:217] Built localhost/my-image:functional-896905 from /tmp/build.1570525081.tar
I0725 17:44:44.504125   22988 build_images.go:133] succeeded building to: functional-896905
I0725 17:44:44.504132   22988 build_images.go:134] failed building to: 
I0725 17:44:44.504157   22988 main.go:141] libmachine: Making call to close driver server
I0725 17:44:44.504169   22988 main.go:141] libmachine: (functional-896905) Calling .Close
I0725 17:44:44.504426   22988 main.go:141] libmachine: Successfully made call to close driver server
I0725 17:44:44.504449   22988 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 17:44:44.504458   22988 main.go:141] libmachine: Making call to close driver server
I0725 17:44:44.504465   22988 main.go:141] libmachine: (functional-896905) Calling .Close
I0725 17:44:44.504474   22988 main.go:141] libmachine: (functional-896905) DBG | Closing plugin on server side
I0725 17:44:44.504666   22988 main.go:141] libmachine: Successfully made call to close driver server
I0725 17:44:44.504683   22988 main.go:141] libmachine: Making call to close connection to plugin binary
I0725 17:44:44.504705   22988 main.go:141] libmachine: (functional-896905) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)
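The exact contents of testdata/build are not included in this report, but the STEP lines above imply a minimal context along these lines. This is a sketch reconstructed from the build output, not the repository's actual test data; the payload written to content.txt is an assumption:

    mkdir -p /tmp/build-example && cd /tmp/build-example
    echo "hello" > content.txt            # placeholder payload; the real test file may differ
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    out/minikube-linux-amd64 -p functional-896905 image build -t localhost/my-image:functional-896905 .

Under crio, minikube tars the context, copies it to /var/lib/minikube/build on the node, and runs "sudo podman build ... --cgroup-manager=cgroupfs" there, which is where the "Trying to pull gcr.io/k8s-minikube/busybox:latest..." lines originate.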

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.728898668s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-896905
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image load --daemon docker.io/kicbase/echo-server:functional-896905 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-896905 image load --daemon docker.io/kicbase/echo-server:functional-896905 --alsologtostderr: (2.085636213s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image load --daemon docker.io/kicbase/echo-server:functional-896905 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-896905
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image load --daemon docker.io/kicbase/echo-server:functional-896905 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.29s)
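The three daemon-load tests above share one flow: ensure the host docker daemon holds the image under the profile-scoped tag, then push it into the cluster's container runtime. A condensed sketch of that flow, using the same commands the tests invoke (echo-server tag as in this run):

    docker pull docker.io/kicbase/echo-server:latest
    docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-896905
    out/minikube-linux-amd64 -p functional-896905 image load --daemon docker.io/kicbase/echo-server:functional-896905
    out/minikube-linux-amd64 -p functional-896905 image ls    # the tag should now appear in the crio image store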

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image save docker.io/kicbase/echo-server:functional-896905 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image rm docker.io/kicbase/echo-server:functional-896905 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-896905 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.944553012s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.25s)
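ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a tarball round trip. A minimal sketch using the same commands (the tar path is the one used by this run; any writable path works):

    out/minikube-linux-amd64 -p functional-896905 image save docker.io/kicbase/echo-server:functional-896905 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-896905 image rm docker.io/kicbase/echo-server:functional-896905
    out/minikube-linux-amd64 -p functional-896905 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-896905 image ls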

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 service list -o json
functional_test.go:1490: Took "299.889535ms" to run "out/minikube-linux-amd64 -p functional-896905 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.106:32494
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.106:32494
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
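The ServiceCmd tests above resolve a NodePort URL for the hello-node service created earlier in this run. A sketch of the lookup plus a quick smoke test (the curl call is added for illustration only and is not part of the test):

    URL=$(out/minikube-linux-amd64 -p functional-896905 service hello-node --url)
    echo "$URL"                        # e.g. http://192.168.39.106:32494 in this run
    curl -s "$URL" | head -n 5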

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "242.380756ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "43.26998ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "239.767842ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "53.382243ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (17.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896905 /tmp/TestFunctionalparallelMountCmdany-port1537293101/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721929466324033688" to /tmp/TestFunctionalparallelMountCmdany-port1537293101/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721929466324033688" to /tmp/TestFunctionalparallelMountCmdany-port1537293101/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721929466324033688" to /tmp/TestFunctionalparallelMountCmdany-port1537293101/001/test-1721929466324033688
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896905 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.664928ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 25 17:44 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 25 17:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 25 17:44 test-1721929466324033688
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh cat /mount-9p/test-1721929466324033688
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-896905 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9d4897c4-ba8a-4880-97d1-2ddd1b89fb0d] Pending
helpers_test.go:344: "busybox-mount" [9d4897c4-ba8a-4880-97d1-2ddd1b89fb0d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9d4897c4-ba8a-4880-97d1-2ddd1b89fb0d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9d4897c4-ba8a-4880-97d1-2ddd1b89fb0d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.004263683s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-896905 logs busybox-mount
E0725 17:44:42.436551   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896905 /tmp/TestFunctionalparallelMountCmdany-port1537293101/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.05s)
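Note how the first findmnt probe exits non-zero (the 9p mount is not yet visible in the guest) and the test simply re-runs it. A hedged sketch of that kind of retry loop is below; the profile name is taken from the log and the 30-second deadline is an assumption.

// Illustrative retry loop (not the test's actual helper): poll
// `minikube ssh "findmnt -T <path> | grep 9p"` until the 9p mount
// appears inside the guest or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMount(profile, guestPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", guestPath))
		if err := cmd.Run(); err == nil {
			return nil // mount is visible inside the guest
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mount %s not ready after %v", guestPath, timeout)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForMount("functional-896905", "/mount-9p", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("9p mount is ready")
}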

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-896905
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 image save --daemon docker.io/kicbase/echo-server:functional-896905 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-896905
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896905 /tmp/TestFunctionalparallelMountCmdspecific-port1687517280/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896905 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.041073ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896905 /tmp/TestFunctionalparallelMountCmdspecific-port1687517280/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896905 ssh "sudo umount -f /mount-9p": exit status 1 (245.335207ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-896905 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896905 /tmp/TestFunctionalparallelMountCmdspecific-port1687517280/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.67s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896905 /tmp/TestFunctionalparallelMountCmdVerifyCleanup866625005/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896905 /tmp/TestFunctionalparallelMountCmdVerifyCleanup866625005/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-896905 /tmp/TestFunctionalparallelMountCmdVerifyCleanup866625005/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-896905 ssh "findmnt -T" /mount1: exit status 1 (266.378986ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-896905 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-896905 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896905 /tmp/TestFunctionalparallelMountCmdVerifyCleanup866625005/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896905 /tmp/TestFunctionalparallelMountCmdVerifyCleanup866625005/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-896905 /tmp/TestFunctionalparallelMountCmdVerifyCleanup866625005/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2024/07/25 17:44:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-896905
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-896905
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-896905
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (202.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174036 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0725 17:46:58.590225   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 17:47:26.276850   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-174036 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m21.477816222s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (202.11s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-174036 -- rollout status deployment/busybox: (4.223703209s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-2mwrb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-qqdtg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-wtxzv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-2mwrb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-qqdtg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-wtxzv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-2mwrb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-qqdtg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-wtxzv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.33s)
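The DeployApp step fans the same three nslookup targets out across every busybox replica. A rough Go sketch of that loop follows; the pod names are the transient ones from this run and the context name comes from the log, so treat both as placeholders.

// Rough sketch of the DNS fan-out recorded above: for every busybox pod,
// resolve a few well-known names with nslookup via kubectl exec.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-fc5497c4f-2mwrb", "busybox-fc5497c4f-qqdtg", "busybox-fc5497c4f-wtxzv"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-174036",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup of %s failed: %v\n%s\n", pod, name, err, out)
				continue
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}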

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-2mwrb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-2mwrb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-qqdtg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-qqdtg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-wtxzv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174036 -- exec busybox-fc5497c4f-wtxzv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
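The ping check derives the host address by taking field 3 of line 5 of nslookup output (the awk 'NR==5' | cut pipeline) and then pings it once from inside the pod. Below is a hedged sketch of the same probe, with the pod and context names reused from the log as placeholders and only minimal error handling.

// Sketch of the host-reachability probe shown above: resolve
// host.minikube.internal inside a pod, then ping the result once.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-fc5497c4f-2mwrb"
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", "ha-174036",
		"exec", pod, "--", "sh", "-c", script).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolved to", hostIP)
	if err := exec.Command("kubectl", "--context", "ha-174036",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).Run(); err != nil {
		panic(err)
	}
	fmt.Println("host is reachable from the pod")
}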

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-174036 -v=7 --alsologtostderr
E0725 17:49:12.057035   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:49:12.062433   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:49:12.072690   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:49:12.092988   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:49:12.133285   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:49:12.213639   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:49:12.374527   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:49:12.695000   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:49:13.335882   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:49:14.616262   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:49:17.176516   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 17:49:22.297074   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-174036 -v=7 --alsologtostderr: (56.167677691s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.98s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-174036 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp testdata/cp-test.txt ha-174036:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile106261026/001/cp-test_ha-174036.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036:/home/docker/cp-test.txt ha-174036-m02:/home/docker/cp-test_ha-174036_ha-174036-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m02 "sudo cat /home/docker/cp-test_ha-174036_ha-174036-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036:/home/docker/cp-test.txt ha-174036-m03:/home/docker/cp-test_ha-174036_ha-174036-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m03 "sudo cat /home/docker/cp-test_ha-174036_ha-174036-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036:/home/docker/cp-test.txt ha-174036-m04:/home/docker/cp-test_ha-174036_ha-174036-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m04 "sudo cat /home/docker/cp-test_ha-174036_ha-174036-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp testdata/cp-test.txt ha-174036-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile106261026/001/cp-test_ha-174036-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m02:/home/docker/cp-test.txt ha-174036:/home/docker/cp-test_ha-174036-m02_ha-174036.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m02 "sudo cat /home/docker/cp-test.txt"
E0725 17:49:32.537520   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036 "sudo cat /home/docker/cp-test_ha-174036-m02_ha-174036.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m02:/home/docker/cp-test.txt ha-174036-m03:/home/docker/cp-test_ha-174036-m02_ha-174036-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m03 "sudo cat /home/docker/cp-test_ha-174036-m02_ha-174036-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m02:/home/docker/cp-test.txt ha-174036-m04:/home/docker/cp-test_ha-174036-m02_ha-174036-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m04 "sudo cat /home/docker/cp-test_ha-174036-m02_ha-174036-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp testdata/cp-test.txt ha-174036-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile106261026/001/cp-test_ha-174036-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt ha-174036:/home/docker/cp-test_ha-174036-m03_ha-174036.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036 "sudo cat /home/docker/cp-test_ha-174036-m03_ha-174036.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt ha-174036-m02:/home/docker/cp-test_ha-174036-m03_ha-174036-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m02 "sudo cat /home/docker/cp-test_ha-174036-m03_ha-174036-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m03:/home/docker/cp-test.txt ha-174036-m04:/home/docker/cp-test_ha-174036-m03_ha-174036-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m04 "sudo cat /home/docker/cp-test_ha-174036-m03_ha-174036-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp testdata/cp-test.txt ha-174036-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile106261026/001/cp-test_ha-174036-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt ha-174036:/home/docker/cp-test_ha-174036-m04_ha-174036.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036 "sudo cat /home/docker/cp-test_ha-174036-m04_ha-174036.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt ha-174036-m02:/home/docker/cp-test_ha-174036-m04_ha-174036-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m02 "sudo cat /home/docker/cp-test_ha-174036-m04_ha-174036-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 cp ha-174036-m04:/home/docker/cp-test.txt ha-174036-m03:/home/docker/cp-test_ha-174036-m04_ha-174036-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 ssh -n ha-174036-m03 "sudo cat /home/docker/cp-test_ha-174036-m04_ha-174036-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.43s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.462270467s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-174036 node delete m03 -v=7 --alsologtostderr: (16.296117766s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.01s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (341.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174036 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0725 18:04:12.056994   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 18:05:35.100452   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 18:06:58.589797   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-174036 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m41.102552354s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (341.84s)
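After the restart, node health is read back with a go-template that prints each node's Ready condition. The sketch below reuses that template verbatim and, purely as an illustration, requires every printed status to be True; the actual assertion in ha_test.go may differ.

// Illustrative readiness check mirroring the `kubectl get nodes -o go-template`
// invocation in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	// One status token per node; require all of them to be "True".
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("found a node that is not Ready:", status)
			return
		}
	}
	fmt.Println("all nodes report Ready=True")
}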

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-174036 --control-plane -v=7 --alsologtostderr
E0725 18:09:12.056496   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-174036 --control-plane -v=7 --alsologtostderr: (1m17.677247812s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-174036 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

                                                
                                    
TestJSONOutput/start/Command (55.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-177001 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-177001 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.791204028s)
--- PASS: TestJSONOutput/start/Command (55.79s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-177001 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-177001 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-177001 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-177001 --output=json --user=testUser: (7.370248847s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-153933 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-153933 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.022815ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f26e4050-8573-4cae-84b8-bd2b3020a8f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-153933] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"038e890a-9c9f-407a-b252-2a57b2fd47d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19326"}}
	{"specversion":"1.0","id":"d5af565f-7c13-4363-9a32-d9cac35991cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3389503a-61b7-4526-8f7a-b4ebef32197e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig"}}
	{"specversion":"1.0","id":"a8342e6b-3008-4fd7-8c90-c34a80eb7574","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube"}}
	{"specversion":"1.0","id":"22966c3a-46c1-43a8-9111-fa357138279d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"23fa76d7-aa1e-4241-a4de-d8757b2f55a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b629d1d3-12a0-4940-9730-efdd32489cf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-153933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-153933
--- PASS: TestErrorJSONOutput (0.18s)
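The captured stdout is a stream of CloudEvents-style JSON objects, one per line, ending in an io.k8s.sigs.minikube.error event with exit code 56. A hedged sketch of a consumer for such a stream follows; the field names mirror the log, and everything is decoded into generic maps rather than an assumed schema.

// Sketch: scan a --output=json event stream on stdin and report any
// "io.k8s.sigs.minikube.error" events it contains.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if ev["type"] == "io.k8s.sigs.minikube.error" {
			data, _ := ev["data"].(map[string]interface{})
			fmt.Printf("error event: exitcode=%v name=%v message=%v\n",
				data["exitcode"], data["name"], data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Piping the saved stdout of a --output=json run into this program would print the DRV_UNSUPPORTED_OS event shown above.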

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (84.02s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-363244 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-363244 --driver=kvm2  --container-runtime=crio: (38.39967307s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-366413 --driver=kvm2  --container-runtime=crio
E0725 18:11:58.590206   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-366413 --driver=kvm2  --container-runtime=crio: (43.074042108s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-363244
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-366413
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-366413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-366413
helpers_test.go:175: Cleaning up "first-363244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-363244
--- PASS: TestMinikubeProfile (84.02s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.62s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-206788 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-206788 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.620216252s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.62s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-206788 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-206788 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.71s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-223679 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-223679 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.709763069s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.71s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-223679 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-223679 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.86s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-206788 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.86s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-223679 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-223679 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-223679
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-223679: (1.269953547s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.44s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-223679
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-223679: (21.444518637s)
--- PASS: TestMountStart/serial/RestartStopped (22.44s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-223679 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-223679 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (119.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-253131 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0725 18:14:12.056511   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 18:15:01.638807   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-253131 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.891501099s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (119.28s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-253131 -- rollout status deployment/busybox: (4.448588465s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-4c929 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-gfbkg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-4c929 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-gfbkg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-4c929 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-gfbkg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.82s)
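
Condensed, the cross-node DNS check above follows this pattern (pod names are whatever the busybox deployment happens to create; the one shown is from this run):

    # deploy the test workload and wait for it to roll out
    out/minikube-linux-amd64 kubectl -p multinode-253131 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-253131 -- rollout status deployment/busybox
    # list the pod names, then resolve cluster DNS from each pod
    out/minikube-linux-amd64 kubectl -p multinode-253131 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-4c929 -- nslookup kubernetes.default.svc.cluster.local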

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-4c929 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-4c929 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-gfbkg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-gfbkg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)
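
The host-reachability check reduces to resolving host.minikube.internal inside a pod and pinging the returned address (192.168.39.1 in this run). A minimal equivalent, using a pod name from this run:

    # extract the resolved IP (line 5, third field of the nslookup output) ...
    out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-4c929 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # ... then ping the host gateway from inside the pod
    out/minikube-linux-amd64 kubectl -p multinode-253131 -- exec busybox-fc5497c4f-4c929 -- sh -c "ping -c 1 192.168.39.1"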

                                                
                                    
TestMultiNode/serial/AddNode (47.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-253131 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-253131 -v 3 --alsologtostderr: (47.129103395s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.68s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-253131 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp testdata/cp-test.txt multinode-253131:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp multinode-253131:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1140125035/001/cp-test_multinode-253131.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp multinode-253131:/home/docker/cp-test.txt multinode-253131-m02:/home/docker/cp-test_multinode-253131_multinode-253131-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m02 "sudo cat /home/docker/cp-test_multinode-253131_multinode-253131-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp multinode-253131:/home/docker/cp-test.txt multinode-253131-m03:/home/docker/cp-test_multinode-253131_multinode-253131-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m03 "sudo cat /home/docker/cp-test_multinode-253131_multinode-253131-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp testdata/cp-test.txt multinode-253131-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp multinode-253131-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1140125035/001/cp-test_multinode-253131-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp multinode-253131-m02:/home/docker/cp-test.txt multinode-253131:/home/docker/cp-test_multinode-253131-m02_multinode-253131.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131 "sudo cat /home/docker/cp-test_multinode-253131-m02_multinode-253131.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp multinode-253131-m02:/home/docker/cp-test.txt multinode-253131-m03:/home/docker/cp-test_multinode-253131-m02_multinode-253131-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m03 "sudo cat /home/docker/cp-test_multinode-253131-m02_multinode-253131-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp testdata/cp-test.txt multinode-253131-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp multinode-253131-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1140125035/001/cp-test_multinode-253131-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp multinode-253131-m03:/home/docker/cp-test.txt multinode-253131:/home/docker/cp-test_multinode-253131-m03_multinode-253131.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131 "sudo cat /home/docker/cp-test_multinode-253131-m03_multinode-253131.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 cp multinode-253131-m03:/home/docker/cp-test.txt multinode-253131-m02:/home/docker/cp-test_multinode-253131-m03_multinode-253131-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m02 "sudo cat /home/docker/cp-test_multinode-253131-m03_multinode-253131-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.98s)
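
The copy matrix above boils down to three variants of `minikube cp` (host to node, node to host, node to node), each verified with `ssh -n`; a minimal sketch with the node-to-host destination path shortened for readability:

    # host -> node
    out/minikube-linux-amd64 -p multinode-253131 cp testdata/cp-test.txt multinode-253131:/home/docker/cp-test.txt
    # node -> host (local destination path is illustrative)
    out/minikube-linux-amd64 -p multinode-253131 cp multinode-253131:/home/docker/cp-test.txt /tmp/cp-test_multinode-253131.txt
    # node -> node
    out/minikube-linux-amd64 -p multinode-253131 cp multinode-253131:/home/docker/cp-test.txt multinode-253131-m02:/home/docker/cp-test_multinode-253131_multinode-253131-m02.txt
    # verify the file landed on the target node
    out/minikube-linux-amd64 -p multinode-253131 ssh -n multinode-253131-m02 "sudo cat /home/docker/cp-test_multinode-253131_multinode-253131-m02.txt"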

                                                
                                    
TestMultiNode/serial/StopNode (2.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-253131 node stop m03: (1.385489879s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-253131 status: exit status 7 (403.157467ms)

                                                
                                                
-- stdout --
	multinode-253131
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-253131-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-253131-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-253131 status --alsologtostderr: exit status 7 (403.985183ms)

                                                
                                                
-- stdout --
	multinode-253131
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-253131-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-253131-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:16:33.952538   40966 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:16:33.952658   40966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:16:33.952662   40966 out.go:304] Setting ErrFile to fd 2...
	I0725 18:16:33.952667   40966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:16:33.952845   40966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:16:33.953007   40966 out.go:298] Setting JSON to false
	I0725 18:16:33.953030   40966 mustload.go:65] Loading cluster: multinode-253131
	I0725 18:16:33.953079   40966 notify.go:220] Checking for updates...
	I0725 18:16:33.953410   40966 config.go:182] Loaded profile config "multinode-253131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:16:33.953426   40966 status.go:255] checking status of multinode-253131 ...
	I0725 18:16:33.953836   40966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:16:33.953898   40966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:16:33.972884   40966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I0725 18:16:33.973364   40966 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:16:33.974048   40966 main.go:141] libmachine: Using API Version  1
	I0725 18:16:33.974092   40966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:16:33.974429   40966 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:16:33.974635   40966 main.go:141] libmachine: (multinode-253131) Calling .GetState
	I0725 18:16:33.976347   40966 status.go:330] multinode-253131 host status = "Running" (err=<nil>)
	I0725 18:16:33.976365   40966 host.go:66] Checking if "multinode-253131" exists ...
	I0725 18:16:33.976686   40966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:16:33.976718   40966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:16:33.991276   40966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0725 18:16:33.991604   40966 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:16:33.992080   40966 main.go:141] libmachine: Using API Version  1
	I0725 18:16:33.992106   40966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:16:33.992414   40966 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:16:33.992598   40966 main.go:141] libmachine: (multinode-253131) Calling .GetIP
	I0725 18:16:33.995345   40966 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:16:33.995761   40966 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:16:33.995793   40966 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:16:33.995865   40966 host.go:66] Checking if "multinode-253131" exists ...
	I0725 18:16:33.996151   40966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:16:33.996190   40966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:16:34.011026   40966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46535
	I0725 18:16:34.011533   40966 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:16:34.012139   40966 main.go:141] libmachine: Using API Version  1
	I0725 18:16:34.012163   40966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:16:34.012514   40966 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:16:34.012714   40966 main.go:141] libmachine: (multinode-253131) Calling .DriverName
	I0725 18:16:34.012942   40966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 18:16:34.012964   40966 main.go:141] libmachine: (multinode-253131) Calling .GetSSHHostname
	I0725 18:16:34.015644   40966 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:16:34.016031   40966 main.go:141] libmachine: (multinode-253131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:aa:de", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:13:45 +0000 UTC Type:0 Mac:52:54:00:9a:aa:de Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-253131 Clientid:01:52:54:00:9a:aa:de}
	I0725 18:16:34.016064   40966 main.go:141] libmachine: (multinode-253131) DBG | domain multinode-253131 has defined IP address 192.168.39.54 and MAC address 52:54:00:9a:aa:de in network mk-multinode-253131
	I0725 18:16:34.016170   40966 main.go:141] libmachine: (multinode-253131) Calling .GetSSHPort
	I0725 18:16:34.016359   40966 main.go:141] libmachine: (multinode-253131) Calling .GetSSHKeyPath
	I0725 18:16:34.016511   40966 main.go:141] libmachine: (multinode-253131) Calling .GetSSHUsername
	I0725 18:16:34.016674   40966 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/multinode-253131/id_rsa Username:docker}
	I0725 18:16:34.095462   40966 ssh_runner.go:195] Run: systemctl --version
	I0725 18:16:34.101190   40966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:16:34.115251   40966 kubeconfig.go:125] found "multinode-253131" server: "https://192.168.39.54:8443"
	I0725 18:16:34.115278   40966 api_server.go:166] Checking apiserver status ...
	I0725 18:16:34.115311   40966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:16:34.130338   40966 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1118/cgroup
	W0725 18:16:34.139765   40966 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1118/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0725 18:16:34.139816   40966 ssh_runner.go:195] Run: ls
	I0725 18:16:34.143887   40966 api_server.go:253] Checking apiserver healthz at https://192.168.39.54:8443/healthz ...
	I0725 18:16:34.147946   40966 api_server.go:279] https://192.168.39.54:8443/healthz returned 200:
	ok
	I0725 18:16:34.147969   40966 status.go:422] multinode-253131 apiserver status = Running (err=<nil>)
	I0725 18:16:34.147981   40966 status.go:257] multinode-253131 status: &{Name:multinode-253131 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 18:16:34.148001   40966 status.go:255] checking status of multinode-253131-m02 ...
	I0725 18:16:34.148414   40966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:16:34.148447   40966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:16:34.163287   40966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41457
	I0725 18:16:34.163682   40966 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:16:34.164131   40966 main.go:141] libmachine: Using API Version  1
	I0725 18:16:34.164152   40966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:16:34.164489   40966 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:16:34.164690   40966 main.go:141] libmachine: (multinode-253131-m02) Calling .GetState
	I0725 18:16:34.166359   40966 status.go:330] multinode-253131-m02 host status = "Running" (err=<nil>)
	I0725 18:16:34.166387   40966 host.go:66] Checking if "multinode-253131-m02" exists ...
	I0725 18:16:34.166659   40966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:16:34.166697   40966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:16:34.181260   40966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0725 18:16:34.181692   40966 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:16:34.182100   40966 main.go:141] libmachine: Using API Version  1
	I0725 18:16:34.182122   40966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:16:34.182393   40966 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:16:34.182536   40966 main.go:141] libmachine: (multinode-253131-m02) Calling .GetIP
	I0725 18:16:34.185467   40966 main.go:141] libmachine: (multinode-253131-m02) DBG | domain multinode-253131-m02 has defined MAC address 52:54:00:77:cc:f4 in network mk-multinode-253131
	I0725 18:16:34.185808   40966 main.go:141] libmachine: (multinode-253131-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:cc:f4", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:14:57 +0000 UTC Type:0 Mac:52:54:00:77:cc:f4 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-253131-m02 Clientid:01:52:54:00:77:cc:f4}
	I0725 18:16:34.185841   40966 main.go:141] libmachine: (multinode-253131-m02) DBG | domain multinode-253131-m02 has defined IP address 192.168.39.179 and MAC address 52:54:00:77:cc:f4 in network mk-multinode-253131
	I0725 18:16:34.185957   40966 host.go:66] Checking if "multinode-253131-m02" exists ...
	I0725 18:16:34.186252   40966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:16:34.186299   40966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:16:34.200706   40966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44265
	I0725 18:16:34.201092   40966 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:16:34.201532   40966 main.go:141] libmachine: Using API Version  1
	I0725 18:16:34.201546   40966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:16:34.201808   40966 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:16:34.201977   40966 main.go:141] libmachine: (multinode-253131-m02) Calling .DriverName
	I0725 18:16:34.202118   40966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 18:16:34.202134   40966 main.go:141] libmachine: (multinode-253131-m02) Calling .GetSSHHostname
	I0725 18:16:34.204905   40966 main.go:141] libmachine: (multinode-253131-m02) DBG | domain multinode-253131-m02 has defined MAC address 52:54:00:77:cc:f4 in network mk-multinode-253131
	I0725 18:16:34.205369   40966 main.go:141] libmachine: (multinode-253131-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:cc:f4", ip: ""} in network mk-multinode-253131: {Iface:virbr1 ExpiryTime:2024-07-25 19:14:57 +0000 UTC Type:0 Mac:52:54:00:77:cc:f4 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:multinode-253131-m02 Clientid:01:52:54:00:77:cc:f4}
	I0725 18:16:34.205393   40966 main.go:141] libmachine: (multinode-253131-m02) DBG | domain multinode-253131-m02 has defined IP address 192.168.39.179 and MAC address 52:54:00:77:cc:f4 in network mk-multinode-253131
	I0725 18:16:34.205606   40966 main.go:141] libmachine: (multinode-253131-m02) Calling .GetSSHPort
	I0725 18:16:34.205773   40966 main.go:141] libmachine: (multinode-253131-m02) Calling .GetSSHKeyPath
	I0725 18:16:34.205930   40966 main.go:141] libmachine: (multinode-253131-m02) Calling .GetSSHUsername
	I0725 18:16:34.206025   40966 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19326-5877/.minikube/machines/multinode-253131-m02/id_rsa Username:docker}
	I0725 18:16:34.283047   40966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:16:34.296224   40966 status.go:257] multinode-253131-m02 status: &{Name:multinode-253131-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0725 18:16:34.296286   40966 status.go:255] checking status of multinode-253131-m03 ...
	I0725 18:16:34.296612   40966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0725 18:16:34.296655   40966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0725 18:16:34.312004   40966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
	I0725 18:16:34.312447   40966 main.go:141] libmachine: () Calling .GetVersion
	I0725 18:16:34.312967   40966 main.go:141] libmachine: Using API Version  1
	I0725 18:16:34.312989   40966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0725 18:16:34.313319   40966 main.go:141] libmachine: () Calling .GetMachineName
	I0725 18:16:34.313503   40966 main.go:141] libmachine: (multinode-253131-m03) Calling .GetState
	I0725 18:16:34.315108   40966 status.go:330] multinode-253131-m03 host status = "Stopped" (err=<nil>)
	I0725 18:16:34.315130   40966 status.go:343] host is not running, skipping remaining checks
	I0725 18:16:34.315136   40966 status.go:257] multinode-253131-m03 status: &{Name:multinode-253131-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)
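
Behaviourally, the key point exercised here is that `minikube status` exits non-zero (status 7) as soon as any node in the profile is stopped, while still printing per-node state. In short:

    # stop the third node only
    out/minikube-linux-amd64 -p multinode-253131 node stop m03
    # status now reports m03 as Stopped and the command itself exits with code 7
    out/minikube-linux-amd64 -p multinode-253131 status
    echo $?   # -> 7 in this run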

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 node start m03 -v=7 --alsologtostderr
E0725 18:16:58.590443   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-253131 node start m03 -v=7 --alsologtostderr: (38.328014484s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.92s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-253131 node delete m03: (1.871300698s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.37s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (182.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-253131 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0725 18:26:58.589874   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-253131 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m2.28995922s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-253131 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (182.81s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-253131
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-253131-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-253131-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.320497ms)

                                                
                                                
-- stdout --
	* [multinode-253131-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-253131-m02' is duplicated with machine name 'multinode-253131-m02' in profile 'multinode-253131'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-253131-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-253131-m03 --driver=kvm2  --container-runtime=crio: (39.621166067s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-253131
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-253131: exit status 80 (204.636056ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-253131 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-253131-m03 already exists in multinode-253131-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-253131-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.69s)
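
The two failure modes checked here are both visible in the output above: a new profile may not reuse a machine name that already belongs to an existing multi-node profile (exit 14, MK_USAGE), and `node add` refuses to add a node whose name already exists (exit 80, GUEST_NODE_ADD). Roughly:

    # 'multinode-253131-m02' is already the second machine of profile 'multinode-253131'
    out/minikube-linux-amd64 start -p multinode-253131-m02 --driver=kvm2 --container-runtime=crio   # exits 14
    # a standalone profile whose name collides with the next auto-assigned node name (m03)
    out/minikube-linux-amd64 start -p multinode-253131-m03 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 node add -p multinode-253131                                           # exits 80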

                                                
                                    
TestScheduledStopUnix (110.37s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-567197 --memory=2048 --driver=kvm2  --container-runtime=crio
E0725 18:31:58.591958   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-567197 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.847914127s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-567197 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-567197 -n scheduled-stop-567197
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-567197 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-567197 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-567197 -n scheduled-stop-567197
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-567197
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-567197 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-567197
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-567197: exit status 7 (63.845451ms)

                                                
                                                
-- stdout --
	scheduled-stop-567197
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-567197 -n scheduled-stop-567197
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-567197 -n scheduled-stop-567197: exit status 7 (55.830068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-567197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-567197
--- PASS: TestScheduledStopUnix (110.37s)
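
The scheduled-stop flow above can be summarised as: arm a delayed stop, cancel it, re-arm it with a short delay, then observe the profile reach the Stopped state (status exits 7 once the host is down). In outline, with an explicit sleep standing in for the test's polling:

    # schedule a stop 5 minutes out, then cancel it
    out/minikube-linux-amd64 stop -p scheduled-stop-567197 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-567197 --cancel-scheduled
    # re-arm with a 15s delay and wait for it to fire
    out/minikube-linux-amd64 stop -p scheduled-stop-567197 --schedule 15s
    sleep 20
    out/minikube-linux-amd64 status -p scheduled-stop-567197   # host: Stopped, exit code 7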

                                                
                                    
TestRunningBinaryUpgrade (215.79s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3526420603 start -p running-upgrade-919785 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0725 18:34:12.056467   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3526420603 start -p running-upgrade-919785 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m54.288496995s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-919785 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-919785 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m38.046945089s)
helpers_test.go:175: Cleaning up "running-upgrade-919785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-919785
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-919785: (1.173129159s)
--- PASS: TestRunningBinaryUpgrade (215.79s)
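
The upgrade path being validated is simply: bring a cluster up with an older released binary, then run `start` again on the same profile with the binary under test while the cluster is still running. Schematically (the versioned binary path is the temp file this run used):

    # old release creates the profile
    /tmp/minikube-v1.26.0.3526420603 start -p running-upgrade-919785 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    # new binary takes over the same, still-running profile
    out/minikube-linux-amd64 start -p running-upgrade-919785 --memory=2200 --driver=kvm2 --container-runtime=crio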

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-896524 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-896524 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (75.642464ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-896524] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
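
The point of this check is that `--no-kubernetes` and `--kubernetes-version` are mutually exclusive; the command fails fast (exit 14) and the stderr above suggests the remedy when a version is pinned in the global config:

    # rejected: cannot pin a Kubernetes version on a cluster that will not run Kubernetes
    out/minikube-linux-amd64 start -p NoKubernetes-896524 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # clear a globally configured version if that is where the conflict comes from
    minikube config unset kubernetes-version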

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (85.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-896524 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-896524 --driver=kvm2  --container-runtime=crio: (1m25.611468255s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-896524 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-896524 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-896524 --no-kubernetes --driver=kvm2  --container-runtime=crio: (8.628014557s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-896524 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-896524 status -o json: exit status 2 (233.948688ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-896524","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-896524
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-896524: (1.640753988s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.50s)
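
Re-running `start` with `--no-kubernetes` on an existing profile keeps the VM but stops the kubelet; `status -o json` then reports the host Running with Kubelet and APIServer Stopped and exits with code 2, as captured above. Minimal form:

    out/minikube-linux-amd64 start -p NoKubernetes-896524 --no-kubernetes --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p NoKubernetes-896524 status -o json   # exit 2; Kubelet/APIServer "Stopped"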

                                                
                                    
TestPause/serial/Start (99.1s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-669817 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-669817 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m39.09726249s)
--- PASS: TestPause/serial/Start (99.10s)

                                                
                                    
TestNoKubernetes/serial/Start (51.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-896524 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-896524 --no-kubernetes --driver=kvm2  --container-runtime=crio: (51.48401883s)
--- PASS: TestNoKubernetes/serial/Start (51.48s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-896524 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-896524 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.70876ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
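
The verification itself is a one-liner: ask systemd inside the guest whether the kubelet unit is active; a non-zero exit (status 1 here, wrapping systemd's status 3 for an inactive unit) confirms Kubernetes is not running:

    out/minikube-linux-amd64 ssh -p NoKubernetes-896524 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not active"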

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.53s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-896524
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-896524: (1.266902016s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (22.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-896524 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-896524 --driver=kvm2  --container-runtime=crio: (22.318152268s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.32s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-896524 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-896524 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.631154ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (105.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3622122322 start -p stopped-upgrade-160946 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3622122322 start -p stopped-upgrade-160946 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (49.732570525s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3622122322 -p stopped-upgrade-160946 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3622122322 -p stopped-upgrade-160946 stop: (1.402855369s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-160946 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-160946 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.563412829s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (105.70s)
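
This variant differs from TestRunningBinaryUpgrade only in that the cluster is stopped before the new binary takes over, i.e. an offline upgrade:

    # create and then stop the cluster with the old release
    /tmp/minikube-v1.26.0.3622122322 start -p stopped-upgrade-160946 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.3622122322 -p stopped-upgrade-160946 stop
    # restart the stopped profile with the binary under test
    out/minikube-linux-amd64 start -p stopped-upgrade-160946 --memory=2200 --driver=kvm2 --container-runtime=crio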

                                                
                                    
TestNetworkPlugins/group/false (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-889508 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-889508 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (112.945878ms)

                                                
                                                
-- stdout --
	* [false-889508] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:38:00.153109   52585 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:38:00.153499   52585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:38:00.153511   52585 out.go:304] Setting ErrFile to fd 2...
	I0725 18:38:00.153517   52585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:38:00.153814   52585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-5877/.minikube/bin
	I0725 18:38:00.154555   52585 out.go:298] Setting JSON to false
	I0725 18:38:00.155584   52585 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4824,"bootTime":1721927856,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0725 18:38:00.155642   52585 start.go:139] virtualization: kvm guest
	I0725 18:38:00.157527   52585 out.go:177] * [false-889508] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0725 18:38:00.159290   52585 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:38:00.159352   52585 notify.go:220] Checking for updates...
	I0725 18:38:00.161662   52585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:38:00.162732   52585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-5877/kubeconfig
	I0725 18:38:00.163847   52585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-5877/.minikube
	I0725 18:38:00.165125   52585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0725 18:38:00.166315   52585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:38:00.168011   52585 config.go:182] Loaded profile config "kubernetes-upgrade-069209": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0725 18:38:00.168208   52585 config.go:182] Loaded profile config "pause-669817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0725 18:38:00.168349   52585 config.go:182] Loaded profile config "stopped-upgrade-160946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0725 18:38:00.168457   52585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:38:00.214295   52585 out.go:177] * Using the kvm2 driver based on user configuration
	I0725 18:38:00.215670   52585 start.go:297] selected driver: kvm2
	I0725 18:38:00.215684   52585 start.go:901] validating driver "kvm2" against <nil>
	I0725 18:38:00.215697   52585 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:38:00.217841   52585 out.go:177] 
	W0725 18:38:00.218943   52585 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0725 18:38:00.220110   52585 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-889508 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-889508" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-889508" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-889508

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-889508"

                                                
                                                
----------------------- debugLogs end: false-889508 [took: 2.960841266s] --------------------------------
helpers_test.go:175: Cleaning up "false-889508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-889508
--- PASS: TestNetworkPlugins/group/false (3.21s)
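Every debugLogs probe above fails with "context was not found" or "Profile ... not found" because the false-889508 profile was never created; the whole TestNetworkPlugins/group/false case completes in 3.21s without starting a cluster, so there is nothing for the collector to inspect. A minimal sketch of confirming that locally (profile and context names taken from the log; these are standard minikube/kubectl calls, not part of the test itself):

  out/minikube-linux-amd64 profile list            # false-889508 is not listed
  kubectl config get-contexts                      # no false-889508 context in kubeconfig
  kubectl --context false-889508 get pods -A       # fails: the context does not exist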

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-160946
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (122s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-371663 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-371663 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (2m2.001684719s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (122.00s)
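--preload=false tells minikube to skip its preloaded image/Kubernetes tarball, so the node pulls each image individually at start, which helps explain the roughly two-minute FirstStart. A hedged sketch of reproducing the start and listing what ended up on the node (flags abridged from the log; the image list call is a standard minikube command, not part of this test):

  out/minikube-linux-amd64 start -p no-preload-371663 --memory=2200 --preload=false \
    --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
  out/minikube-linux-amd64 -p no-preload-371663 image list   # images pulled individually, no preload tarball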

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-600433 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-600433 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (58.439659566s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.44s)
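--apiserver-port=8444 moves the API server off the default 8443 for this profile. A quick way to confirm the port from the generated kubeconfig (a sketch; the jsonpath query is an illustration, not part of the test):

  kubectl config view --minify --context default-k8s-diff-port-600433 \
    -o jsonpath='{.clusters[0].cluster.server}'    # expected to end in :8444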

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-371663 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9a19fc6a-6194-4c15-8414-a7c7da162bce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9a19fc6a-6194-4c15-8414-a7c7da162bce] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005370214s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-371663 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.31s)
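The DeployApp step creates a pod from testdata/busybox.yaml, waits for it to become Ready, then reads the container's open-file limit. The real manifest is not reproduced in this report; the sketch below approximates it with kubectl run, keeping only the integration-test=busybox label the test selects on (image and command are assumptions):

  kubectl --context no-preload-371663 run busybox --image=busybox:1.28 \
    --labels=integration-test=busybox --command -- sleep 3600        # assumed image and command
  kubectl --context no-preload-371663 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
  kubectl --context no-preload-371663 exec busybox -- /bin/sh -c "ulimit -n"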

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-371663 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-371663 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)
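EnableAddonWhileActive checks that an addon can be switched on against a running cluster; here metrics-server is enabled with an overridden image and registry, and the resulting deployment is described. A sketch of inspecting the override after enabling (the jsonpath query is an illustration; the exact composed image string is minikube-internal):

  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-371663 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  kubectl --context no-preload-371663 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'   # should reflect the fake.domain override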

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-600433 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [700149fa-1af8-429d-b3c3-f47b06c7e4f0] Pending
helpers_test.go:344: "busybox" [700149fa-1af8-429d-b3c3-f47b06c7e4f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [700149fa-1af8-429d-b3c3-f47b06c7e4f0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004617842s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-600433 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-600433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-600433 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-819413 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-819413 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (48.451267424s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.45s)
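This profile starts with --network-plugin=cni, a reduced --wait set (apiserver, system_pods, default_sa), and a kubeadm pod-network-cidr override of 10.42.0.0/16. One way to confirm the CIDR override took effect (a sketch; the jsonpath query is an illustration, not part of the test):

  kubectl --context newest-cni-819413 get nodes \
    -o jsonpath='{.items[0].spec.podCIDR}'   # expected to fall inside 10.42.0.0/16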

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-819413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-819413 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-819413 --alsologtostderr -v=3: (10.443120006s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-819413 -n newest-cni-819413
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-819413 -n newest-cni-819413: exit status 7 (63.277047ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-819413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
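EnableAddonAfterStop verifies that addons can still be toggled while the profile is stopped: minikube status exits non-zero (status 7) with Host reported as Stopped, which the test accepts, and the dashboard addon is then enabled against the stopped profile. A sketch of the same sequence (commands copied from the log; the || true only keeps a shell script going past the expected non-zero exit):

  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-819413 || true   # prints "Stopped", exit code 7
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-819413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4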

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (38.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-819413 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-819413 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (38.1690412s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-819413 -n newest-cni-819413
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.52s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-819413 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-819413 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-819413 --alsologtostderr -v=1: (1.675832011s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-819413 -n newest-cni-819413
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-819413 -n newest-cni-819413: exit status 2 (315.853933ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-819413 -n newest-cni-819413
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-819413 -n newest-cni-819413: exit status 2 (291.439743ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-819413 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-819413 -n newest-cni-819413
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-819413 -n newest-cni-819413
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.98s)
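The Pause check drives the profile through pause and unpause while polling component state: after pause, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped (both with the expected non-zero exit from minikube status); after unpause, the same status calls succeed again. A sketch of the cycle (commands copied from the log):

  out/minikube-linux-amd64 pause -p newest-cni-819413 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-819413 || true   # "Paused"
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-819413 || true     # "Stopped"
  out/minikube-linux-amd64 unpause -p newest-cni-819413 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-819413           # expected to succeed once unpaused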

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (59.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-646344 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-646344 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (59.67139928s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (59.67s)
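--embed-certs asks minikube to write the client certificate and key data directly into kubeconfig instead of referencing files under ~/.minikube. A quick way to confirm the embedding (a sketch; the grep is an illustration, not part of the test):

  kubectl config view --raw --minify --context embed-certs-646344 \
    | grep -E 'client-certificate-data|client-key-data'   # present when certs are embedded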

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (661.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-371663 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-371663 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (11m1.309055177s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-371663 -n no-preload-371663
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (661.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (568.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-600433 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-600433 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m28.47931527s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-600433 -n default-k8s-diff-port-600433
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (568.73s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-646344 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4a7da430-e23b-4464-81b8-46671459aca5] Pending
helpers_test.go:344: "busybox" [4a7da430-e23b-4464-81b8-46671459aca5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4a7da430-e23b-4464-81b8-46671459aca5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004416331s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-646344 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-646344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-646344 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-108542 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-108542 --alsologtostderr -v=3: (4.284113233s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108542 -n old-k8s-version-108542: exit status 7 (62.791956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-108542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (422.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-646344 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0725 18:48:21.640484   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 18:49:12.057770   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 18:51:58.590459   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 18:54:12.056650   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-646344 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (7m1.918903787s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-646344 -n embed-certs-646344
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (422.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (95.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m35.986550456s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (75.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m15.345753098s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-889508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-889508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-b2ptc" [10785505-d5eb-46ee-a837-aab5868a1919] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-b2ptc" [10785505-d5eb-46ee-a837-aab5868a1919] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003568873s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8xhc9" [34a5eade-1f3b-488c-9dd2-13ae10c9622c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004664074s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
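ControllerPod waits for the CNI's own pod (label app=kindnet in kube-system) to become healthy before the connectivity checks run. Roughly the same wait can be expressed directly with kubectl (a sketch, not part of the test):

  kubectl --context kindnet-889508 -n kube-system wait \
    --for=condition=ready pod -l app=kindnet --timeout=10m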

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-889508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-889508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qh4rk" [54238604-74d9-4b45-8339-5623c126e136] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qh4rk" [54238604-74d9-4b45-8339-5623c126e136] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004706948s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-889508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
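Each TestNetworkPlugins group runs the same three connectivity probes against the netcat deployment: in-cluster DNS, a localhost connection, and a hairpin connection back to the pod's own service. The three kubectl calls, as exercised above for the auto profile (copied from the log):

  kubectl --context auto-889508 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"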

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-889508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (88.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m28.326775005s)
--- PASS: TestNetworkPlugins/group/calico/Start (88.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (103.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m43.273635366s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (103.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (92.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0725 19:11:37.068081   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:37.073356   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:37.083678   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:37.103937   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:37.144310   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:37.224664   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:37.385099   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:37.706125   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:38.347291   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:39.627896   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:42.188137   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:47.308681   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:57.548824   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:11:58.590298   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/addons-377932/client.crt: no such file or directory
E0725 19:12:04.986294   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:04.991621   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:05.001937   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:05.024431   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:05.064743   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:05.145520   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:05.305933   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:05.626128   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:06.266528   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:07.547725   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:10.107918   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:15.103419   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/functional-896905/client.crt: no such file or directory
E0725 19:12:15.228683   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:12:18.029716   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/no-preload-371663/client.crt: no such file or directory
E0725 19:12:25.469104   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m32.113800404s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2g9sq" [5a2206ae-f255-4eb2-a83b-6e05a9ab6ca4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005679684s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-889508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-889508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tkq69" [dd71beb4-be91-41d5-b750-dfd272f04566] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0725 19:12:45.950150   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-tkq69" [dd71beb4-be91-41d5-b750-dfd272f04566] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004261454s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-889508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)
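Note: the calico DNS, Localhost, and HairPin checks above all exec into the netcat deployment created from testdata/netcat-deployment.yaml, so they can be reproduced by hand with the same commands the tests run:

    kubectl --context calico-889508 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context calico-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context calico-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The first verifies in-cluster DNS resolution, the second that the pod can reach its own listening port over localhost, and the third (hairpin) that the pod can reach itself back through the netcat Service.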

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-889508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-889508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4n4lx" [30afb229-ac76-4afc-819a-ab6c7bdf2344] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-4n4lx" [30afb229-ac76-4afc-819a-ab6c7bdf2344] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004200092s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-889508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-889508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lvblk" [81788fba-7525-422f-9f97-1ad95020aeb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lvblk" [81788fba-7525-422f-9f97-1ad95020aeb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004272085s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-646344 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-646344 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-646344 -n embed-certs-646344
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-646344 -n embed-certs-646344: exit status 2 (277.41142ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-646344 -n embed-certs-646344
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-646344 -n embed-certs-646344: exit status 2 (289.609962ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-646344 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-646344 --alsologtostderr -v=1: (1.031123876s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-646344 -n embed-certs-646344
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-646344 -n embed-certs-646344
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.21s)
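Note: the non-zero status exits in the Pause sequence above are expected rather than failures. While the profile is paused, --format={{.APIServer}} prints Paused and --format={{.Kubelet}} prints Stopped, and minikube reports the non-Running state through its exit code, which the helper logs as "status error: exit status 2 (may be ok)". A minimal sketch of the same sequence by hand, using the profile from this test:

    out/minikube-linux-amd64 pause -p embed-certs-646344 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-646344 -n embed-certs-646344    # prints Paused, exits non-zero while paused
    out/minikube-linux-amd64 unpause -p embed-certs-646344 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-646344 -n embed-certs-646344    # expected to exit 0 again once unpaused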
E0725 19:13:26.910416   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/default-k8s-diff-port-600433/client.crt: no such file or directory
E0725 19:13:35.965280   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (84.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m24.460301204s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (32.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-889508 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-889508 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.167745286s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-889508 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-889508 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155954087s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-889508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (32.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-889508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (76.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0725 19:13:15.483879   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
E0725 19:13:15.489152   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
E0725 19:13:15.499447   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
E0725 19:13:15.519779   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
E0725 19:13:15.560136   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
E0725 19:13:15.640533   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
E0725 19:13:15.800951   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
E0725 19:13:16.121510   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
E0725 19:13:16.761983   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
E0725 19:13:18.043057   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
E0725 19:13:20.603582   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-889508 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m16.432244419s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-n9hrd" [da46dbad-e296-485d-8ea8-dc57d8819a65] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004468086s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-889508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-889508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qfbnw" [5cfc6295-b48c-4f8c-8ad5-4002849d7760] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qfbnw" [5cfc6295-b48c-4f8c-8ad5-4002849d7760] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003664755s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-889508 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-889508 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8rqr9" [4c5690d3-6968-4c92-9718-4117cd0cff28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0725 19:14:37.406897   13059 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/old-k8s-version-108542/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-8rqr9" [4c5690d3-6968-4c92-9718-4117cd0cff28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005980277s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-889508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-889508 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-889508 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    

Test skip (40/322)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
268 TestStartStop/group/disable-driver-mounts 0.14
283 TestNetworkPlugins/group/kubenet 3.49
291 TestNetworkPlugins/group/cilium 5.22
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-045154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-045154
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-889508 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-889508" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-889508" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19326-5877/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 25 Jul 2024 18:37:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.61.203:8443
  name: pause-669817
contexts:
- context:
    cluster: pause-669817
    extensions:
    - extension:
        last-update: Thu, 25 Jul 2024 18:37:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-669817
  name: pause-669817
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-669817
  user:
    client-certificate: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/pause-669817/client.crt
    client-key: /home/jenkins/minikube-integration/19326-5877/.minikube/profiles/pause-669817/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-889508

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-889508"

                                                
                                                
----------------------- debugLogs end: kubenet-889508 [took: 3.335536151s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-889508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-889508
--- SKIP: TestNetworkPlugins/group/kubenet (3.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-889508 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-889508" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-889508

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-889508" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-889508"

                                                
                                                
----------------------- debugLogs end: cilium-889508 [took: 5.0749671s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-889508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-889508
--- SKIP: TestNetworkPlugins/group/cilium (5.22s)

                                                
                                    